00:00:00.001 Started by upstream project "autotest-per-patch" build number 124201
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.102 The recommended git tool is: git
00:00:00.102 using credential 00000000-0000-0000-0000-000000000002
00:00:00.103 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.138 Fetching changes from the remote Git repository
00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.172 Using shallow fetch with depth 1
00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.172 > git --version # timeout=10
00:00:00.206 > git --version # 'git version 2.39.2'
00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.840 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.851 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.863 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:06.863 > git config core.sparsecheckout # timeout=10
00:00:06.875 > git read-tree -mu HEAD # timeout=10
00:00:06.890 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:06.909 Commit message: "pool: fixes for VisualBuild class"
00:00:06.909 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:06.999 [Pipeline] Start of Pipeline
00:00:07.011 [Pipeline] library
00:00:07.012 Loading library shm_lib@master
00:00:07.013 Library shm_lib@master is cached. Copying from home.
00:00:07.029 [Pipeline] node
00:00:07.039 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.041 [Pipeline] {
00:00:07.048 [Pipeline] catchError
00:00:07.049 [Pipeline] {
00:00:07.060 [Pipeline] wrap
00:00:07.070 [Pipeline] {
00:00:07.075 [Pipeline] stage
00:00:07.076 [Pipeline] { (Prologue)
00:00:07.089 [Pipeline] echo
00:00:07.090 Node: VM-host-SM9
00:00:07.094 [Pipeline] cleanWs
00:00:07.101 [WS-CLEANUP] Deleting project workspace...
00:00:07.101 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.107 [WS-CLEANUP] done
00:00:07.281 [Pipeline] setCustomBuildProperty
00:00:07.333 [Pipeline] nodesByLabel
00:00:07.334 Found a total of 2 nodes with the 'sorcerer' label
00:00:07.341 [Pipeline] httpRequest
00:00:07.345 HttpMethod: GET
00:00:07.346 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:07.346 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:07.358 Response Code: HTTP/1.1 200 OK
00:00:07.359 Success: Status code 200 is in the accepted range: 200,404
00:00:07.359 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:15.142 [Pipeline] sh
00:00:15.423 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:15.441 [Pipeline] httpRequest
00:00:15.445 HttpMethod: GET
00:00:15.446 URL: http://10.211.164.101/packages/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz
00:00:15.446 Sending request to url: http://10.211.164.101/packages/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz
00:00:15.459 Response Code: HTTP/1.1 200 OK
00:00:15.459 Success: Status code 200 is in the accepted range: 200,404
00:00:15.460 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz
00:01:02.017 [Pipeline] sh
00:01:02.294 + tar --no-same-owner -xf spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz
00:01:05.620 [Pipeline] sh
00:01:05.901 + git -C spdk log --oneline -n5
00:01:05.901 0a5aebcde go/rpc: Initial implementation of rpc call generator
00:01:05.901 8b1e208cc python/rpc: Python rpc docs generator.
00:01:05.901 98215362c python/rpc: Replace jsonrpc.md with generated docs
00:01:05.901 43217a125 python/rpc: Python rpc call generator.
00:01:05.901 902020273 python/rpc: Replace bdev.py with generated rpc's
00:01:05.921 [Pipeline] writeFile
00:01:05.938 [Pipeline] sh
00:01:06.220 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:06.231 [Pipeline] sh
00:01:06.508 + cat autorun-spdk.conf
00:01:06.508 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.508 SPDK_TEST_NVME=1
00:01:06.508 SPDK_TEST_FTL=1
00:01:06.508 SPDK_TEST_ISAL=1
00:01:06.508 SPDK_RUN_ASAN=1
00:01:06.508 SPDK_RUN_UBSAN=1
00:01:06.508 SPDK_TEST_XNVME=1
00:01:06.508 SPDK_TEST_NVME_FDP=1
00:01:06.508 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:06.515 RUN_NIGHTLY=0
00:01:06.517 [Pipeline] }
00:01:06.537 [Pipeline] // stage
00:01:06.551 [Pipeline] stage
00:01:06.553 [Pipeline] { (Run VM)
00:01:06.569 [Pipeline] sh
00:01:06.851 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:06.851 + echo 'Start stage prepare_nvme.sh'
00:01:06.851 Start stage prepare_nvme.sh
00:01:06.851 + [[ -n 4 ]]
00:01:06.851 + disk_prefix=ex4
00:01:06.851 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:06.851 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:06.851 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:06.851 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.851 ++ SPDK_TEST_NVME=1
00:01:06.851 ++ SPDK_TEST_FTL=1
00:01:06.851 ++ SPDK_TEST_ISAL=1
00:01:06.851 ++ SPDK_RUN_ASAN=1
00:01:06.851 ++ SPDK_RUN_UBSAN=1
00:01:06.851 ++ SPDK_TEST_XNVME=1
00:01:06.851 ++ SPDK_TEST_NVME_FDP=1
00:01:06.851 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:06.851 ++ RUN_NIGHTLY=0
00:01:06.851 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:06.851 + nvme_files=()
00:01:06.851 + declare -A nvme_files
00:01:06.851 + backend_dir=/var/lib/libvirt/images/backends
00:01:06.851 + nvme_files['nvme.img']=5G
00:01:06.851 + nvme_files['nvme-cmb.img']=5G
00:01:06.851 + nvme_files['nvme-multi0.img']=4G
00:01:06.851 + nvme_files['nvme-multi1.img']=4G
00:01:06.851 + nvme_files['nvme-multi2.img']=4G
00:01:06.851 + nvme_files['nvme-openstack.img']=8G
00:01:06.851 + nvme_files['nvme-zns.img']=5G
00:01:06.851 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:06.851 + (( SPDK_TEST_FTL == 1 ))
00:01:06.851 + nvme_files["nvme-ftl.img"]=6G
00:01:06.851 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:06.851 + nvme_files["nvme-fdp.img"]=1G
00:01:06.851 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:06.851 + for nvme in "${!nvme_files[@]}"
00:01:06.851 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:06.851 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.851 + for nvme in "${!nvme_files[@]}"
00:01:06.851 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:01:06.851 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:06.851 + for nvme in "${!nvme_files[@]}"
00:01:06.851 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:07.110 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.110 + for nvme in "${!nvme_files[@]}"
00:01:07.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:07.110 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:07.110 + for nvme in "${!nvme_files[@]}"
00:01:07.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:07.110 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.110 + for nvme in "${!nvme_files[@]}"
00:01:07.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:07.110 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.110 + for nvme in "${!nvme_files[@]}"
00:01:07.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:07.369 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.369 + for nvme in "${!nvme_files[@]}"
00:01:07.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:01:07.369 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:07.369 + for nvme in "${!nvme_files[@]}"
00:01:07.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:07.369 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.369 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:07.369 + echo 'End stage prepare_nvme.sh'
00:01:07.369 End stage prepare_nvme.sh
00:01:07.383 [Pipeline] sh
00:01:07.668 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:07.668 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:01:07.927
00:01:07.927 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:07.927 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:07.927 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:07.927 HELP=0
00:01:07.927 DRY_RUN=0
00:01:07.928 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:01:07.928 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:07.928 NVME_AUTO_CREATE=0
00:01:07.928 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:01:07.928 NVME_CMB=,,,,
00:01:07.928 NVME_PMR=,,,,
00:01:07.928 NVME_ZNS=,,,,
00:01:07.928 NVME_MS=true,,,,
00:01:07.928 NVME_FDP=,,,on,
00:01:07.928 SPDK_VAGRANT_DISTRO=fedora38
00:01:07.928 SPDK_VAGRANT_VMCPU=10
00:01:07.928 SPDK_VAGRANT_VMRAM=12288
00:01:07.928 SPDK_VAGRANT_PROVIDER=libvirt
00:01:07.928 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:07.928 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:07.928 SPDK_OPENSTACK_NETWORK=0
00:01:07.928 VAGRANT_PACKAGE_BOX=0
00:01:07.928 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:07.928 FORCE_DISTRO=true
00:01:07.928 VAGRANT_BOX_VERSION=
00:01:07.928 EXTRA_VAGRANTFILES=
00:01:07.928 NIC_MODEL=e1000
00:01:07.928
00:01:07.928 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:01:07.928 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:11.212 Bringing machine 'default' up with 'libvirt' provider...
00:01:11.780 ==> default: Creating image (snapshot of base box volume).
00:01:12.040 ==> default: Creating domain with the following settings...
00:01:12.040 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718013001_cf73b677e69842f5437a
00:01:12.040 ==> default: -- Domain type: kvm
00:01:12.040 ==> default: -- Cpus: 10
00:01:12.040 ==> default: -- Feature: acpi
00:01:12.040 ==> default: -- Feature: apic
00:01:12.040 ==> default: -- Feature: pae
00:01:12.040 ==> default: -- Memory: 12288M
00:01:12.040 ==> default: -- Memory Backing: hugepages:
00:01:12.040 ==> default: -- Management MAC:
00:01:12.040 ==> default: -- Loader:
00:01:12.040 ==> default: -- Nvram:
00:01:12.040 ==> default: -- Base box: spdk/fedora38
00:01:12.040 ==> default: -- Storage pool: default
00:01:12.040 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718013001_cf73b677e69842f5437a.img (20G)
00:01:12.040 ==> default: -- Volume Cache: default
00:01:12.040 ==> default: -- Kernel:
00:01:12.040 ==> default: -- Initrd:
00:01:12.040 ==> default: -- Graphics Type: vnc
00:01:12.040 ==> default: -- Graphics Port: -1
00:01:12.040 ==> default: -- Graphics IP: 127.0.0.1
00:01:12.040 ==> default: -- Graphics Password: Not defined
00:01:12.040 ==> default: -- Video Type: cirrus
00:01:12.040 ==> default: -- Video VRAM: 9216
00:01:12.040 ==> default: -- Sound Type:
00:01:12.040 ==> default: -- Keymap: en-us
00:01:12.040 ==> default: -- TPM Path:
00:01:12.040 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:12.040 ==> default: -- Command line args:
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:12.040 ==> default: -> value=-drive,
00:01:12.040 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:12.040 ==> default: -> value=-drive,
00:01:12.040 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:12.040 ==> default: -> value=-drive,
00:01:12.040 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.040 ==> default: -> value=-drive,
00:01:12.040 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.040 ==> default: -> value=-drive,
00:01:12.040 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:12.040 ==> default: -> value=-drive,
00:01:12.040 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:12.040 ==> default: -> value=-device,
00:01:12.040 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.040 ==> default: Creating shared folders metadata...
00:01:12.040 ==> default: Starting domain.
00:01:13.419 ==> default: Waiting for domain to get an IP address...
00:01:31.502 ==> default: Waiting for SSH to become available...
00:01:31.502 ==> default: Configuring and enabling network interfaces...
00:01:34.055 default: SSH address: 192.168.121.227:22
00:01:34.055 default: SSH username: vagrant
00:01:34.055 default: SSH auth method: private key
00:01:36.585 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:44.712 ==> default: Mounting SSHFS shared folder...
00:01:45.278 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:45.278 ==> default: Checking Mount..
00:01:46.654 ==> default: Folder Successfully Mounted!
00:01:46.654 ==> default: Running provisioner: file...
00:01:47.223 default: ~/.gitconfig => .gitconfig
00:01:47.792
00:01:47.792 SUCCESS!
00:01:47.792
00:01:47.792 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:47.792 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:47.792 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:47.792
00:01:47.802 [Pipeline] }
00:01:47.822 [Pipeline] // stage
00:01:47.833 [Pipeline] dir
00:01:47.834 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:01:47.836 [Pipeline] {
00:01:47.851 [Pipeline] catchError
00:01:47.853 [Pipeline] {
00:01:47.869 [Pipeline] sh
00:01:48.184 + vagrant ssh-config --host vagrant
00:01:48.184 + sed -ne /^Host/,$p
00:01:48.184 + tee ssh_conf
00:01:52.376 Host vagrant
00:01:52.376 HostName 192.168.121.227
00:01:52.376 User vagrant
00:01:52.376 Port 22
00:01:52.376 UserKnownHostsFile /dev/null
00:01:52.376 StrictHostKeyChecking no
00:01:52.376 PasswordAuthentication no
00:01:52.376 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:52.376 IdentitiesOnly yes
00:01:52.376 LogLevel FATAL
00:01:52.376 ForwardAgent yes
00:01:52.376 ForwardX11 yes
00:01:52.376
00:01:52.391 [Pipeline] withEnv
00:01:52.394 [Pipeline] {
00:01:52.410 [Pipeline] sh
00:01:52.691 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:52.691 source /etc/os-release
00:01:52.691 [[ -e /image.version ]] && img=$(< /image.version)
00:01:52.691 # Minimal, systemd-like check.
00:01:52.691 if [[ -e /.dockerenv ]]; then
00:01:52.691 # Clear garbage from the node's name:
00:01:52.691 # agt-er_autotest_547-896 -> autotest_547-896
00:01:52.691 # $HOSTNAME is the actual container id
00:01:52.691 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:52.691 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:52.691 # We can assume this is a mount from a host where container is running,
00:01:52.691 # so fetch its hostname to easily identify the target swarm worker.
00:01:52.691 container="$(< /etc/hostname) ($agent)"
00:01:52.691 else
00:01:52.691 # Fallback
00:01:52.691 container=$agent
00:01:52.691 fi
00:01:52.691 fi
00:01:52.691 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:52.691
00:01:52.702 [Pipeline] }
00:01:52.720 [Pipeline] // withEnv
00:01:52.730 [Pipeline] setCustomBuildProperty
00:01:52.746 [Pipeline] stage
00:01:52.749 [Pipeline] { (Tests)
00:01:52.766 [Pipeline] sh
00:01:53.067 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:53.081 [Pipeline] sh
00:01:53.361 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:53.634 [Pipeline] timeout
00:01:53.635 Timeout set to expire in 40 min
00:01:53.637 [Pipeline] {
00:01:53.654 [Pipeline] sh
00:01:53.935 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:54.502 HEAD is now at 0a5aebcde go/rpc: Initial implementation of rpc call generator
00:01:54.517 [Pipeline] sh
00:01:54.797 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:55.070 [Pipeline] sh
00:01:55.350 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:55.625 [Pipeline] sh
00:01:55.906 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:55.906 ++ readlink -f spdk_repo
00:01:55.906 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:55.906 + [[ -n /home/vagrant/spdk_repo ]]
00:01:55.906 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:55.906 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:55.906 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:55.906 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:55.906 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:55.906 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:55.906 + cd /home/vagrant/spdk_repo
00:01:55.906 + source /etc/os-release
00:01:55.906 ++ NAME='Fedora Linux'
00:01:55.906 ++ VERSION='38 (Cloud Edition)'
00:01:55.906 ++ ID=fedora
00:01:55.906 ++ VERSION_ID=38
00:01:55.906 ++ VERSION_CODENAME=
00:01:55.906 ++ PLATFORM_ID=platform:f38
00:01:55.906 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:55.906 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:55.906 ++ LOGO=fedora-logo-icon
00:01:55.906 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:55.906 ++ HOME_URL=https://fedoraproject.org/
00:01:55.906 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:55.906 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:55.906 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:55.906 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:55.906 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:55.906 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:55.906 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:55.906 ++ SUPPORT_END=2024-05-14
00:01:55.906 ++ VARIANT='Cloud Edition'
00:01:55.906 ++ VARIANT_ID=cloud
00:01:55.906 + uname -a
00:01:55.906 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:55.906 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:56.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:56.732 Hugepages
00:01:56.732 node hugesize free / total
00:01:56.732 node0 1048576kB 0 / 0
00:01:56.732 node0 2048kB 0 / 0
00:01:56.732
00:01:56.732 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:56.732 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:56.732 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:56.732 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:56.732 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:56.732 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:56.732 + rm -f /tmp/spdk-ld-path
00:01:56.732 + source autorun-spdk.conf
00:01:56.732 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:56.732 ++ SPDK_TEST_NVME=1
00:01:56.732 ++ SPDK_TEST_FTL=1
00:01:56.732 ++ SPDK_TEST_ISAL=1
00:01:56.732 ++ SPDK_RUN_ASAN=1
00:01:56.732 ++ SPDK_RUN_UBSAN=1
00:01:56.732 ++ SPDK_TEST_XNVME=1
00:01:56.732 ++ SPDK_TEST_NVME_FDP=1
00:01:56.732 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:56.732 ++ RUN_NIGHTLY=0
00:01:56.732 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:56.732 + [[ -n '' ]]
00:01:56.732 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:56.732 + for M in /var/spdk/build-*-manifest.txt
00:01:56.732 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:56.732 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:56.732 + for M in /var/spdk/build-*-manifest.txt
00:01:56.732 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:56.732 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:56.991 ++ uname
00:01:56.991 + [[ Linux == \L\i\n\u\x ]]
00:01:56.991 + sudo dmesg -T
00:01:56.991 + sudo dmesg --clear
00:01:56.991 + dmesg_pid=5203
00:01:56.991 + [[ Fedora Linux == FreeBSD ]]
00:01:56.991 + sudo dmesg -Tw
00:01:56.991 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:56.991 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:56.991 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:56.991 + [[ -x /usr/src/fio-static/fio ]]
00:01:56.991 + export FIO_BIN=/usr/src/fio-static/fio
00:01:56.991 + FIO_BIN=/usr/src/fio-static/fio
00:01:56.991 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:56.991 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:56.991 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:56.992 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:56.992 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:56.992 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:56.992 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:56.992 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:56.992 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:56.992 Test configuration:
00:01:56.992 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:56.992 SPDK_TEST_NVME=1
00:01:56.992 SPDK_TEST_FTL=1
00:01:56.992 SPDK_TEST_ISAL=1
00:01:56.992 SPDK_RUN_ASAN=1
00:01:56.992 SPDK_RUN_UBSAN=1
00:01:56.992 SPDK_TEST_XNVME=1
00:01:56.992 SPDK_TEST_NVME_FDP=1
00:01:56.992 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:56.992 RUN_NIGHTLY=0
09:50:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:56.992 09:50:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:56.992 09:50:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:56.992 09:50:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:56.992 09:50:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:56.992 09:50:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:56.992 09:50:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:56.992 09:50:46 -- paths/export.sh@5 -- $ export PATH
00:01:56.992 09:50:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:56.992 09:50:46 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:56.992 09:50:46 -- common/autobuild_common.sh@437 -- $ date +%s
00:01:56.992 09:50:46 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718013046.XXXXXX
00:01:56.992 09:50:46 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013046.D33PSA
00:01:56.992 09:50:46 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:01:56.992 09:50:46 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:01:56.992 09:50:46 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:56.992 09:50:46 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:56.992 09:50:46 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:56.992 09:50:46 -- common/autobuild_common.sh@453 -- $ get_config_params
00:01:56.992 09:50:46 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:56.992 09:50:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.992 09:50:46 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:56.992 09:50:46 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:01:56.992 09:50:46 -- pm/common@17 -- $ local monitor
00:01:56.992 09:50:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:56.992 09:50:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:56.992 09:50:46 -- pm/common@25 -- $ sleep 1
00:01:56.992 09:50:46 -- pm/common@21 -- $ date +%s
00:01:56.992 09:50:46 -- pm/common@21 -- $ date +%s
00:01:56.992 09:50:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718013046
00:01:56.992 09:50:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718013046
00:01:56.992 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718013046_collect-vmstat.pm.log
00:01:56.992 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718013046_collect-cpu-load.pm.log
00:01:57.251 Traceback (most recent call last):
00:01:57.251 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:01:57.251 import spdk.rpc as rpc # noqa
00:01:57.251 ^^^^^^^^^^^^^^^^^^^^^^
00:01:57.251 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/__init__.py", line 13, in <module>
00:01:57.251 from . import bdev
00:01:57.251 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/bdev.py", line 6, in <module>
00:01:57.251 from spdk.rpc.rpc import *
00:01:57.251 ModuleNotFoundError: No module named 'spdk.rpc.rpc'
00:01:58.187 09:50:47 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:01:58.187 09:50:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:58.187 09:50:47 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:58.187 09:50:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:58.187 09:50:47 -- spdk/autobuild.sh@16 -- $ date -u
00:01:58.187 Mon Jun 10 09:50:47 AM UTC 2024
00:01:58.187 09:50:47 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:58.187 v24.09-pre-63-g0a5aebcde
00:01:58.187 09:50:47 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:58.187 09:50:47 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:58.187 09:50:47 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:01:58.187 09:50:47 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:01:58.187 09:50:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.187 ************************************
00:01:58.187 START TEST asan
00:01:58.187 ************************************
00:01:58.187 using asan
00:01:58.187 09:50:47 asan -- common/autotest_common.sh@1124 -- $ echo 'using asan'
00:01:58.187
00:01:58.187 real 0m0.000s
00:01:58.187 user 0m0.000s
00:01:58.187 sys 0m0.000s
00:01:58.187 09:50:47 asan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:01:58.187 09:50:47 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:58.187 ************************************
00:01:58.187 END TEST asan
00:01:58.188 ************************************
00:01:58.188 09:50:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:58.188 09:50:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:58.188 09:50:47 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:01:58.188 09:50:47 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:01:58.188 09:50:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.188 ************************************
00:01:58.188 START TEST ubsan
00:01:58.188 ************************************
00:01:58.188 using ubsan
00:01:58.188 09:50:47 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:01:58.188
00:01:58.188 real 0m0.000s
00:01:58.188 user 0m0.000s
00:01:58.188 sys 0m0.000s
00:01:58.188 09:50:47 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:01:58.188 09:50:47 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:58.188 ************************************
00:01:58.188 END TEST ubsan
00:01:58.188 ************************************
00:01:58.188 09:50:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:58.188 09:50:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:58.188 09:50:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:58.188 09:50:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:58.188 09:50:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:58.188 09:50:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:58.188 09:50:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:58.188 09:50:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:58.188 09:50:47 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:58.188 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:58.188 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:58.755 Using 'verbs' RDMA provider
00:02:14.572 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:26.875 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:26.875 Creating mk/config.mk...done.
00:02:26.875 Creating mk/cc.flags.mk...done.
00:02:26.875 Type 'make' to build.
09:51:14 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
09:51:14 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
09:51:14 -- common/autotest_common.sh@1106 -- $ xtrace_disable
09:51:14 -- common/autotest_common.sh@10 -- $ set +x
************************************
START TEST make
************************************
09:51:14 make -- common/autotest_common.sh@1124 -- $ make -j10
(cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:26.875 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:26.875 meson setup builddir \
00:02:26.875 -Dwith-libaio=enabled \
00:02:26.875 -Dwith-liburing=enabled \
00:02:26.875 -Dwith-libvfn=disabled \
00:02:26.875 -Dwith-spdk=false && \
00:02:26.875 meson compile -C builddir && \
00:02:26.875 cd -)
00:02:28.774 The Meson build system
00:02:28.774 Version: 1.3.1
00:02:28.774 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:28.774 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:28.774 Build type: native build
00:02:28.774 Project name: xnvme
00:02:28.774 Project version: 0.7.3
00:02:28.774 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:28.774 C linker for the host machine: cc ld.bfd 2.39-16
00:02:28.774 Host machine cpu family: x86_64
00:02:28.774 Host machine cpu: x86_64
00:02:28.774 Message: host_machine.system: linux
00:02:28.774 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:28.774 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:28.774 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:28.774 Run-time dependency threads found: YES
00:02:28.774 Has header "setupapi.h" : NO
00:02:28.774 Has header "linux/blkzoned.h" : YES
00:02:28.774 Has header "linux/blkzoned.h" : YES (cached)
00:02:28.774 Has header "libaio.h" : YES
00:02:28.774 Library aio found: YES
00:02:28.774 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:28.774 Run-time dependency liburing found: YES 2.2
00:02:28.774 Dependency libvfn skipped: feature with-libvfn disabled
00:02:28.774 Run-time dependency appleframeworks found: NO (tried framework)
00:02:28.774 Run-time dependency appleframeworks found: NO (tried framework)
00:02:28.774 Configuring xnvme_config.h using configuration
00:02:28.774 Configuring xnvme.spec using configuration
00:02:28.774 Run-time dependency bash-completion found: YES 2.11
00:02:28.774 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:28.774 Program cp found: YES (/usr/bin/cp)
00:02:28.774 Has header "winsock2.h" : NO
00:02:28.774 Has header "dbghelp.h" : NO
00:02:28.774 Library rpcrt4 found: NO
00:02:28.774 Library rt found: YES
00:02:28.774 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:28.774 Found CMake: /usr/bin/cmake (3.27.7)
00:02:28.774 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:28.774 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:28.774 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:28.774 Build targets in project: 32
00:02:28.774
00:02:28.774 xnvme 0.7.3
00:02:28.774
00:02:28.774 User defined options
00:02:28.774 with-libaio : enabled
00:02:28.774 with-liburing: enabled
00:02:28.774 with-libvfn : disabled
00:02:28.774 with-spdk : false
00:02:28.774
00:02:28.774 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:29.340 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:29.340 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:29.340 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:29.340 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:29.340 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:29.340 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:29.340 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:29.340 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:29.340 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:29.340 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:29.340 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:29.340 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:29.340 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:29.340 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:29.599 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:29.599 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:29.599 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:29.599 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:29.599 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:29.599 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:29.599 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:29.599 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:29.599 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:29.599 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:29.599 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:29.599 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:29.599 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:29.599 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:29.599 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:29.599 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:29.599 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:29.599 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:29.599 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:29.599 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:29.599 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:29.599 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:29.599 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:29.858 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:29.858 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:29.858 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:29.858 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:29.858 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:29.858 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:29.858 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:29.858 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:29.858 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:29.858 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:29.858 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:29.858 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:29.858 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:29.858 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:29.858 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:29.858 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:29.858 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:29.858 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:29.858 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:29.858 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:29.858 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:29.858 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:29.858 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:29.858 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:29.858 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:30.117 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:30.117 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:30.117 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:30.117 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:30.117 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:30.117 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:30.117 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:30.117 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:30.117 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:30.117 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:30.117 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:30.376 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:30.376 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:30.376 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:30.376 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:30.376 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:30.376 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:30.376 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:30.376 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:30.376 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:30.376 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:30.635 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:30.635 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:30.635 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:30.635 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:30.635 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:30.635 [88/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:30.635 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:30.635 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:30.635 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:30.635 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:30.635 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:30.635 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:30.635 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:30.635 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:30.635 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:30.635 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:30.635 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:30.635 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:30.635 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:30.635 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:30.635 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:30.635 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:30.635 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:30.635 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:30.635 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:30.893 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:30.893 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:30.893 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:30.893 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:30.893 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:30.893 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:30.893 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:30.893 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:30.893 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:30.893 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:30.893 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:30.893 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:30.893 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:30.893 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:30.893 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:30.893 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:30.893 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:30.893 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:30.893 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:30.893 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:30.893 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:30.893 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:31.151 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:31.151 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:31.151 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:31.151 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:31.151 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:31.151 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:31.151 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:31.151 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:31.151 [138/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:31.151 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:31.151 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:31.410 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:31.410 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:31.410 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:31.410 [144/203] Linking target lib/libxnvme.so
00:02:31.410 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:31.410 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:31.410 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:31.410 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:31.410 [149/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:31.667 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:31.667 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:31.667 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:31.667 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:31.667 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:31.667 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:31.667 [156/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:31.667 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:31.667 [158/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:31.667 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:31.928 [160/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:31.928 [161/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:31.928 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:31.928 [163/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:31.928 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:31.928 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:31.928 [166/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:31.928 [167/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:31.928 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:32.186 [169/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:32.186 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:32.186 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:32.186 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:32.186 [173/203] Linking static target lib/libxnvme.a
00:02:32.444 [174/203] Linking target tests/xnvme_tests_enum
00:02:32.444 [175/203] Linking target tests/xnvme_tests_buf
00:02:32.444 [176/203] Linking target tests/xnvme_tests_async_intf
00:02:32.444 [177/203] Linking target tests/xnvme_tests_ioworker
00:02:32.444 [178/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:32.444 [179/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:32.444 [180/203] Linking target tests/xnvme_tests_cli
00:02:32.444 [181/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:32.444 [182/203] Linking target tests/xnvme_tests_scc
00:02:32.444 [183/203] Linking target tests/xnvme_tests_znd_append
00:02:32.444 [184/203] Linking target tests/xnvme_tests_xnvme_file
00:02:32.444 [185/203] Linking target tests/xnvme_tests_lblk
00:02:32.444 [186/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:32.444 [187/203] Linking target tests/xnvme_tests_znd_state
00:02:32.444 [188/203] Linking target tools/xdd
00:02:32.444 [189/203] Linking target tests/xnvme_tests_map
00:02:32.444 [190/203] Linking target tests/xnvme_tests_kvs
00:02:32.444 [191/203] Linking target tools/xnvme
00:02:32.444 [192/203] Linking target tools/zoned
00:02:32.444 [193/203] Linking target tools/xnvme_file
00:02:32.444 [194/203] Linking target tools/kvs
00:02:32.444 [195/203] Linking target examples/xnvme_enum
00:02:32.444 [196/203] Linking target examples/xnvme_dev
00:02:32.444 [197/203] Linking target examples/xnvme_hello
00:02:32.444 [198/203] Linking target tools/lblk
00:02:32.444 [199/203] Linking target examples/zoned_io_async
00:02:32.444 [200/203] Linking target examples/zoned_io_sync
00:02:32.444 [201/203] Linking target examples/xnvme_single_async
00:02:32.444 [202/203] Linking target examples/xnvme_io_async
00:02:32.444 [203/203] Linking target examples/xnvme_single_sync
00:02:32.444 INFO: autodetecting backend as ninja
00:02:32.444 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:32.702 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:42.669 The Meson build system
00:02:42.669 Version: 1.3.1
00:02:42.669 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:42.669 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:42.669 Build type: native build
00:02:42.669 Program cat found: YES (/usr/bin/cat)
00:02:42.669 Project name: DPDK
00:02:42.669 Project version: 24.03.0
00:02:42.669 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:42.669 C linker for the host machine: cc ld.bfd 2.39-16
00:02:42.669 Host machine cpu family: x86_64
00:02:42.669 Host machine cpu: x86_64
00:02:42.669 Message: ## Building in Developer Mode ##
00:02:42.669 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:42.669 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:42.669 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:42.669 Program python3 found: YES (/usr/bin/python3)
00:02:42.669 Program cat found: YES (/usr/bin/cat)
00:02:42.669 Compiler for C supports arguments -march=native: YES
00:02:42.669 Checking for size of "void *" : 8
00:02:42.669 Checking for size of "void *" : 8 (cached)
00:02:42.669 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:42.669 Library m found: YES
00:02:42.669 Library numa found: YES
00:02:42.669 Has header "numaif.h" : YES
00:02:42.669 Library fdt found: NO
00:02:42.669 Library execinfo found: NO
00:02:42.669 Has header "execinfo.h" : YES
00:02:42.669 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:42.669 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:42.669 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:42.669 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:42.669 Run-time dependency openssl found: YES 3.0.9
00:02:42.669 Run-time dependency libpcap found: YES 1.10.4
00:02:42.669 Has header "pcap.h" with dependency libpcap: YES
00:02:42.669 Compiler for C supports arguments -Wcast-qual: YES
00:02:42.669 Compiler for C supports arguments -Wdeprecated: YES
00:02:42.669 Compiler for C supports arguments -Wformat: YES
00:02:42.669 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:42.669 Compiler for C supports arguments -Wformat-security: NO
00:02:42.669 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:42.669 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:42.669 Compiler for C supports arguments -Wnested-externs: YES
00:02:42.669 Compiler for C supports arguments -Wold-style-definition: YES
00:02:42.669 Compiler for C supports arguments -Wpointer-arith: YES
00:02:42.669 Compiler for C supports arguments -Wsign-compare: YES
00:02:42.669 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:42.669 Compiler for C supports arguments -Wundef: YES
00:02:42.669 Compiler for C supports arguments -Wwrite-strings: YES
00:02:42.669 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:42.669 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:42.669 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:42.669 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:42.669 Program objdump found: YES (/usr/bin/objdump)
00:02:42.669 Compiler for C supports arguments -mavx512f: YES
00:02:42.669 Checking if "AVX512 checking" compiles: YES
00:02:42.669 Fetching value of define "__SSE4_2__" : 1
00:02:42.669 Fetching value of define "__AES__" : 1
00:02:42.669 Fetching value of define "__AVX__" : 1
00:02:42.669 Fetching value of define "__AVX2__" : 1
00:02:42.669 Fetching value of define "__AVX512BW__" : (undefined)
00:02:42.669 Fetching value of define "__AVX512CD__" : (undefined)
00:02:42.669 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:42.669 Fetching value of define "__AVX512F__" : (undefined)
00:02:42.669 Fetching value of define "__AVX512VL__" : (undefined)
00:02:42.669 Fetching value of define "__PCLMUL__" : 1
00:02:42.669 Fetching value of define "__RDRND__" : 1
00:02:42.669 Fetching value of define "__RDSEED__" : 1
00:02:42.669 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:42.669 Fetching value of define "__znver1__" : (undefined)
00:02:42.669 Fetching value of define "__znver2__" : (undefined)
00:02:42.669 Fetching value of define "__znver3__" : (undefined)
00:02:42.669 Fetching value of define "__znver4__" : (undefined)
00:02:42.669 Library asan found: YES
00:02:42.669 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:42.669 Message: lib/log: Defining dependency "log"
00:02:42.669 Message: lib/kvargs: Defining dependency "kvargs"
00:02:42.669 Message: lib/telemetry: Defining dependency "telemetry"
00:02:42.669 Library rt found: YES
00:02:42.669 Checking for function "getentropy" : NO
00:02:42.669 Message: lib/eal: Defining dependency "eal"
00:02:42.669 Message: lib/ring: Defining dependency "ring"
00:02:42.669 Message: lib/rcu: Defining dependency "rcu"
00:02:42.669 Message: lib/mempool: Defining dependency "mempool"
00:02:42.669 Message: lib/mbuf: Defining dependency "mbuf"
00:02:42.669 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:42.670 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:42.670 Compiler for C supports arguments -mpclmul: YES
00:02:42.670 Compiler for C supports arguments -maes: YES
00:02:42.670 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:42.670 Compiler for C supports arguments -mavx512bw: YES
00:02:42.670 Compiler for C supports arguments -mavx512dq: YES
00:02:42.670 Compiler for C supports arguments -mavx512vl: YES
00:02:42.670 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:42.670 Compiler for C supports arguments -mavx2: YES
00:02:42.670 Compiler for C supports arguments -mavx: YES
00:02:42.670 Message: lib/net: Defining dependency "net"
00:02:42.670 Message: lib/meter: Defining dependency "meter"
00:02:42.670 Message: lib/ethdev: Defining dependency "ethdev"
00:02:42.670 Message: lib/pci: Defining dependency "pci"
00:02:42.670 Message: lib/cmdline: Defining dependency "cmdline"
00:02:42.670 Message: lib/hash: Defining dependency "hash"
00:02:42.670 Message: lib/timer: Defining dependency "timer"
00:02:42.670 Message: lib/compressdev: Defining dependency "compressdev"
00:02:42.670 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:42.670 Message: lib/dmadev: Defining dependency "dmadev"
00:02:42.670 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:42.670 Message: lib/power: Defining dependency "power"
00:02:42.670 Message: lib/reorder: Defining dependency "reorder"
00:02:42.670 Message: lib/security: Defining dependency "security"
00:02:42.670 Has header "linux/userfaultfd.h" : YES
00:02:42.670 Has header "linux/vduse.h" : YES
00:02:42.670 Message: lib/vhost: Defining dependency "vhost"
00:02:42.670 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:42.670 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:42.670 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:42.670 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:42.670 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:42.670 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:42.670 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:42.670 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:42.670 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:42.670 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:42.670 Program doxygen found: YES (/usr/bin/doxygen)
00:02:42.670 Configuring doxy-api-html.conf using configuration
00:02:42.670 Configuring doxy-api-man.conf using configuration
00:02:42.670 Program mandb found: YES (/usr/bin/mandb)
00:02:42.670 Program sphinx-build found: NO
00:02:42.670 Configuring rte_build_config.h using configuration
00:02:42.670 Message:
00:02:42.670 =================
00:02:42.670 Applications Enabled
00:02:42.670 =================
00:02:42.670
00:02:42.670 apps:
00:02:42.670
00:02:42.670
00:02:42.670 Message:
00:02:42.670 =================
00:02:42.670 Libraries Enabled
00:02:42.670 ================= 00:02:42.670 00:02:42.670 libs: 00:02:42.670 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:42.670 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:42.670 cryptodev, dmadev, power, reorder, security, vhost, 00:02:42.670 00:02:42.670 Message: 00:02:42.670 =============== 00:02:42.670 Drivers Enabled 00:02:42.670 =============== 00:02:42.670 00:02:42.670 common: 00:02:42.670 00:02:42.670 bus: 00:02:42.670 pci, vdev, 00:02:42.670 mempool: 00:02:42.670 ring, 00:02:42.670 dma: 00:02:42.670 00:02:42.670 net: 00:02:42.670 00:02:42.670 crypto: 00:02:42.670 00:02:42.670 compress: 00:02:42.670 00:02:42.670 vdpa: 00:02:42.670 00:02:42.670 00:02:42.670 Message: 00:02:42.670 ================= 00:02:42.670 Content Skipped 00:02:42.670 ================= 00:02:42.670 00:02:42.670 apps: 00:02:42.670 dumpcap: explicitly disabled via build config 00:02:42.670 graph: explicitly disabled via build config 00:02:42.670 pdump: explicitly disabled via build config 00:02:42.670 proc-info: explicitly disabled via build config 00:02:42.670 test-acl: explicitly disabled via build config 00:02:42.670 test-bbdev: explicitly disabled via build config 00:02:42.670 test-cmdline: explicitly disabled via build config 00:02:42.670 test-compress-perf: explicitly disabled via build config 00:02:42.670 test-crypto-perf: explicitly disabled via build config 00:02:42.670 test-dma-perf: explicitly disabled via build config 00:02:42.670 test-eventdev: explicitly disabled via build config 00:02:42.670 test-fib: explicitly disabled via build config 00:02:42.670 test-flow-perf: explicitly disabled via build config 00:02:42.670 test-gpudev: explicitly disabled via build config 00:02:42.670 test-mldev: explicitly disabled via build config 00:02:42.670 test-pipeline: explicitly disabled via build config 00:02:42.670 test-pmd: explicitly disabled via build config 00:02:42.670 test-regex: explicitly disabled via build config 00:02:42.670 test-sad: explicitly disabled via build config 00:02:42.670 test-security-perf: explicitly disabled via build config 00:02:42.670 00:02:42.670 libs: 00:02:42.670 argparse: explicitly disabled via build config 00:02:42.670 metrics: explicitly disabled via build config 00:02:42.670 acl: explicitly disabled via build config 00:02:42.670 bbdev: explicitly disabled via build config 00:02:42.670 bitratestats: explicitly disabled via build config 00:02:42.670 bpf: explicitly disabled via build config 00:02:42.670 cfgfile: explicitly disabled via build config 00:02:42.670 distributor: explicitly disabled via build config 00:02:42.670 efd: explicitly disabled via build config 00:02:42.670 eventdev: explicitly disabled via build config 00:02:42.670 dispatcher: explicitly disabled via build config 00:02:42.670 gpudev: explicitly disabled via build config 00:02:42.670 gro: explicitly disabled via build config 00:02:42.670 gso: explicitly disabled via build config 00:02:42.670 ip_frag: explicitly disabled via build config 00:02:42.670 jobstats: explicitly disabled via build config 00:02:42.670 latencystats: explicitly disabled via build config 00:02:42.670 lpm: explicitly disabled via build config 00:02:42.670 member: explicitly disabled via build config 00:02:42.670 pcapng: explicitly disabled via build config 00:02:42.670 rawdev: explicitly disabled via build config 00:02:42.670 regexdev: explicitly disabled via build config 00:02:42.670 mldev: explicitly disabled via build config 00:02:42.670 rib: explicitly disabled via build config 00:02:42.670 sched: 
explicitly disabled via build config 00:02:42.670 stack: explicitly disabled via build config 00:02:42.670 ipsec: explicitly disabled via build config 00:02:42.670 pdcp: explicitly disabled via build config 00:02:42.670 fib: explicitly disabled via build config 00:02:42.670 port: explicitly disabled via build config 00:02:42.670 pdump: explicitly disabled via build config 00:02:42.670 table: explicitly disabled via build config 00:02:42.670 pipeline: explicitly disabled via build config 00:02:42.670 graph: explicitly disabled via build config 00:02:42.670 node: explicitly disabled via build config 00:02:42.670 00:02:42.670 drivers: 00:02:42.670 common/cpt: not in enabled drivers build config 00:02:42.670 common/dpaax: not in enabled drivers build config 00:02:42.670 common/iavf: not in enabled drivers build config 00:02:42.670 common/idpf: not in enabled drivers build config 00:02:42.670 common/ionic: not in enabled drivers build config 00:02:42.670 common/mvep: not in enabled drivers build config 00:02:42.670 common/octeontx: not in enabled drivers build config 00:02:42.670 bus/auxiliary: not in enabled drivers build config 00:02:42.670 bus/cdx: not in enabled drivers build config 00:02:42.670 bus/dpaa: not in enabled drivers build config 00:02:42.670 bus/fslmc: not in enabled drivers build config 00:02:42.670 bus/ifpga: not in enabled drivers build config 00:02:42.670 bus/platform: not in enabled drivers build config 00:02:42.670 bus/uacce: not in enabled drivers build config 00:02:42.670 bus/vmbus: not in enabled drivers build config 00:02:42.670 common/cnxk: not in enabled drivers build config 00:02:42.670 common/mlx5: not in enabled drivers build config 00:02:42.670 common/nfp: not in enabled drivers build config 00:02:42.671 common/nitrox: not in enabled drivers build config 00:02:42.671 common/qat: not in enabled drivers build config 00:02:42.671 common/sfc_efx: not in enabled drivers build config 00:02:42.671 mempool/bucket: not in enabled drivers build config 00:02:42.671 mempool/cnxk: not in enabled drivers build config 00:02:42.671 mempool/dpaa: not in enabled drivers build config 00:02:42.671 mempool/dpaa2: not in enabled drivers build config 00:02:42.671 mempool/octeontx: not in enabled drivers build config 00:02:42.671 mempool/stack: not in enabled drivers build config 00:02:42.671 dma/cnxk: not in enabled drivers build config 00:02:42.671 dma/dpaa: not in enabled drivers build config 00:02:42.671 dma/dpaa2: not in enabled drivers build config 00:02:42.671 dma/hisilicon: not in enabled drivers build config 00:02:42.671 dma/idxd: not in enabled drivers build config 00:02:42.671 dma/ioat: not in enabled drivers build config 00:02:42.671 dma/skeleton: not in enabled drivers build config 00:02:42.671 net/af_packet: not in enabled drivers build config 00:02:42.671 net/af_xdp: not in enabled drivers build config 00:02:42.671 net/ark: not in enabled drivers build config 00:02:42.671 net/atlantic: not in enabled drivers build config 00:02:42.671 net/avp: not in enabled drivers build config 00:02:42.671 net/axgbe: not in enabled drivers build config 00:02:42.671 net/bnx2x: not in enabled drivers build config 00:02:42.671 net/bnxt: not in enabled drivers build config 00:02:42.671 net/bonding: not in enabled drivers build config 00:02:42.671 net/cnxk: not in enabled drivers build config 00:02:42.671 net/cpfl: not in enabled drivers build config 00:02:42.671 net/cxgbe: not in enabled drivers build config 00:02:42.671 net/dpaa: not in enabled drivers build config 00:02:42.671 net/dpaa2: 
not in enabled drivers build config 00:02:42.671 net/e1000: not in enabled drivers build config 00:02:42.671 net/ena: not in enabled drivers build config 00:02:42.671 net/enetc: not in enabled drivers build config 00:02:42.671 net/enetfec: not in enabled drivers build config 00:02:42.671 net/enic: not in enabled drivers build config 00:02:42.671 net/failsafe: not in enabled drivers build config 00:02:42.671 net/fm10k: not in enabled drivers build config 00:02:42.671 net/gve: not in enabled drivers build config 00:02:42.671 net/hinic: not in enabled drivers build config 00:02:42.671 net/hns3: not in enabled drivers build config 00:02:42.671 net/i40e: not in enabled drivers build config 00:02:42.671 net/iavf: not in enabled drivers build config 00:02:42.671 net/ice: not in enabled drivers build config 00:02:42.671 net/idpf: not in enabled drivers build config 00:02:42.671 net/igc: not in enabled drivers build config 00:02:42.671 net/ionic: not in enabled drivers build config 00:02:42.671 net/ipn3ke: not in enabled drivers build config 00:02:42.671 net/ixgbe: not in enabled drivers build config 00:02:42.671 net/mana: not in enabled drivers build config 00:02:42.671 net/memif: not in enabled drivers build config 00:02:42.671 net/mlx4: not in enabled drivers build config 00:02:42.671 net/mlx5: not in enabled drivers build config 00:02:42.671 net/mvneta: not in enabled drivers build config 00:02:42.671 net/mvpp2: not in enabled drivers build config 00:02:42.671 net/netvsc: not in enabled drivers build config 00:02:42.671 net/nfb: not in enabled drivers build config 00:02:42.671 net/nfp: not in enabled drivers build config 00:02:42.671 net/ngbe: not in enabled drivers build config 00:02:42.671 net/null: not in enabled drivers build config 00:02:42.671 net/octeontx: not in enabled drivers build config 00:02:42.671 net/octeon_ep: not in enabled drivers build config 00:02:42.671 net/pcap: not in enabled drivers build config 00:02:42.671 net/pfe: not in enabled drivers build config 00:02:42.671 net/qede: not in enabled drivers build config 00:02:42.671 net/ring: not in enabled drivers build config 00:02:42.671 net/sfc: not in enabled drivers build config 00:02:42.671 net/softnic: not in enabled drivers build config 00:02:42.671 net/tap: not in enabled drivers build config 00:02:42.671 net/thunderx: not in enabled drivers build config 00:02:42.671 net/txgbe: not in enabled drivers build config 00:02:42.671 net/vdev_netvsc: not in enabled drivers build config 00:02:42.671 net/vhost: not in enabled drivers build config 00:02:42.671 net/virtio: not in enabled drivers build config 00:02:42.671 net/vmxnet3: not in enabled drivers build config 00:02:42.671 raw/*: missing internal dependency, "rawdev" 00:02:42.671 crypto/armv8: not in enabled drivers build config 00:02:42.671 crypto/bcmfs: not in enabled drivers build config 00:02:42.671 crypto/caam_jr: not in enabled drivers build config 00:02:42.671 crypto/ccp: not in enabled drivers build config 00:02:42.671 crypto/cnxk: not in enabled drivers build config 00:02:42.671 crypto/dpaa_sec: not in enabled drivers build config 00:02:42.671 crypto/dpaa2_sec: not in enabled drivers build config 00:02:42.671 crypto/ipsec_mb: not in enabled drivers build config 00:02:42.671 crypto/mlx5: not in enabled drivers build config 00:02:42.671 crypto/mvsam: not in enabled drivers build config 00:02:42.671 crypto/nitrox: not in enabled drivers build config 00:02:42.671 crypto/null: not in enabled drivers build config 00:02:42.671 crypto/octeontx: not in enabled drivers build 
config 00:02:42.671 crypto/openssl: not in enabled drivers build config 00:02:42.671 crypto/scheduler: not in enabled drivers build config 00:02:42.671 crypto/uadk: not in enabled drivers build config 00:02:42.671 crypto/virtio: not in enabled drivers build config 00:02:42.671 compress/isal: not in enabled drivers build config 00:02:42.671 compress/mlx5: not in enabled drivers build config 00:02:42.671 compress/nitrox: not in enabled drivers build config 00:02:42.671 compress/octeontx: not in enabled drivers build config 00:02:42.671 compress/zlib: not in enabled drivers build config 00:02:42.671 regex/*: missing internal dependency, "regexdev" 00:02:42.671 ml/*: missing internal dependency, "mldev" 00:02:42.671 vdpa/ifc: not in enabled drivers build config 00:02:42.671 vdpa/mlx5: not in enabled drivers build config 00:02:42.671 vdpa/nfp: not in enabled drivers build config 00:02:42.671 vdpa/sfc: not in enabled drivers build config 00:02:42.671 event/*: missing internal dependency, "eventdev" 00:02:42.671 baseband/*: missing internal dependency, "bbdev" 00:02:42.671 gpu/*: missing internal dependency, "gpudev" 00:02:42.671 00:02:42.671 00:02:42.671 Build targets in project: 85 00:02:42.671 00:02:42.671 DPDK 24.03.0 00:02:42.671 00:02:42.671 User defined options 00:02:42.671 buildtype : debug 00:02:42.671 default_library : shared 00:02:42.671 libdir : lib 00:02:42.671 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.671 b_sanitize : address 00:02:42.671 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:42.671 c_link_args : 00:02:42.671 cpu_instruction_set: native 00:02:42.671 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:42.671 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:42.671 enable_docs : false 00:02:42.671 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:42.671 enable_kmods : false 00:02:42.671 tests : false 00:02:42.671 00:02:42.671 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.929 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.929 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:43.186 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.186 [3/268] Linking static target lib/librte_kvargs.a 00:02:43.186 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.186 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:43.186 [6/268] Linking static target lib/librte_log.a 00:02:43.750 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.750 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.007 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.265 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.265 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.265 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 
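The "User defined options" block above shows this DPDK tree configured with buildtype=debug and b_sanitize=address, so every object is compiled with AddressSanitizer (hence "Library asan found: YES" earlier). A tiny demonstration of what that instrumentation buys, independent of DPDK (a sketch; the file name is hypothetical):

    /* asan_demo.c -- build: cc -g -fsanitize=address asan_demo.c && ./a.out */
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(8);
        buf[8] = 'x';   /* one byte past the allocation: ASan aborts with a
                           heap-buffer-overflow report and a stack trace */
        free(buf);
        return 0;
    }

Without -fsanitize=address this off-by-one would likely run silently; with it, the run fails deterministically, which is the point of sanitized CI builds like this one.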
00:02:44.265 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.265 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.265 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.265 [16/268] Linking target lib/librte_log.so.24.1 00:02:44.265 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.522 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.522 [19/268] Linking static target lib/librte_telemetry.a 00:02:44.522 [20/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:44.779 [21/268] Linking target lib/librte_kvargs.so.24.1 00:02:44.779 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.037 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.037 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.037 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.295 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.295 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.295 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.295 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.295 [30/268] Linking target lib/librte_telemetry.so.24.1 00:02:45.552 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.552 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:45.552 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.809 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.809 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.067 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.324 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:46.581 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.581 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:46.581 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.838 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.838 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.838 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.838 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.838 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:46.838 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:46.838 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.404 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.404 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.662 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.662 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 
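The eal_common_*.c objects above make up librte_eal, the environment abstraction layer that every DPDK program brings up first. A minimal consumer, as a sketch assuming DPDK development headers and pkg-config are installed (the file name is hypothetical):

    /* eal_min.c -- build roughly: cc eal_min.c $(pkg-config --cflags --libs libdpdk) */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses the EAL arguments (lcores, memory, PCI
         * options) and starts the runtime being compiled above. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        printf("EAL up on %u lcore(s)\n", rte_lcore_count());
        rte_eal_cleanup();
        return 0;
    }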
00:02:47.920 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.920 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:48.178 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.178 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.436 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.436 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.436 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.693 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.693 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.951 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.952 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.952 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.952 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:49.210 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:49.468 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:49.468 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:49.726 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:49.984 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:49.984 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:49.984 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.242 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:50.242 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:50.242 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.242 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.242 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:50.500 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:50.758 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:50.759 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:51.017 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:51.276 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:51.276 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:51.276 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:51.562 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:51.562 [85/268] Linking static target lib/librte_eal.a 00:02:51.562 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:51.562 [87/268] Linking static target lib/librte_ring.a 00:02:51.819 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:52.078 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:52.078 [90/268] Linking static target lib/librte_rcu.a 00:02:52.078 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:52.336 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:52.336 [93/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:52.336 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:52.336 [95/268] Linking static target lib/librte_mempool.a 00:02:52.594 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.594 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.851 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:52.851 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:53.109 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.109 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.675 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:53.675 [103/268] Linking static target lib/librte_mbuf.a 00:02:53.675 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.934 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:53.934 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:53.934 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:53.934 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:53.934 [109/268] Linking static target lib/librte_meter.a 00:02:53.934 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.192 [111/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:54.192 [112/268] Linking static target lib/librte_net.a 00:02:54.451 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.451 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:54.451 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:54.451 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.709 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:54.709 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:54.709 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.275 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:55.275 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:55.275 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:55.841 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:55.841 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:55.841 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:55.841 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:56.099 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:56.099 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.099 [129/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.099 [130/268] Linking static target lib/librte_pci.a 00:02:56.099 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:56.099 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:56.099 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.099 [134/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:56.357 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:56.357 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:56.357 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:56.357 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:56.357 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:56.357 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:56.357 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:56.357 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.357 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:56.615 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:56.615 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:56.873 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:56.873 [147/268] Linking static target lib/librte_cmdline.a 00:02:57.132 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:57.132 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:57.132 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:57.132 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:57.390 [152/268] Linking static target lib/librte_ethdev.a 00:02:57.390 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.390 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.390 [155/268] Linking static target lib/librte_timer.a 00:02:57.390 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:57.647 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.905 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.905 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:57.905 [160/268] Linking static target lib/librte_hash.a 00:02:58.162 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.162 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.162 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.162 [164/268] Linking static target lib/librte_compressdev.a 00:02:58.420 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.420 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.420 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.678 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.678 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.678 [170/268] Linking static target lib/librte_dmadev.a 00:02:58.678 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.935 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.210 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:59.210 [174/268] 
Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.210 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.210 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:59.468 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.468 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:59.726 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.726 [180/268] Linking static target lib/librte_cryptodev.a 00:02:59.726 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:59.726 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:59.726 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.726 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.002 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:00.002 [186/268] Linking static target lib/librte_power.a 00:03:00.002 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.002 [188/268] Linking static target lib/librte_reorder.a 00:03:00.278 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.536 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.536 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.536 [192/268] Linking static target lib/librte_security.a 00:03:00.536 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.795 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.795 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.054 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.054 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.313 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.313 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:01.571 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:01.571 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:01.571 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:01.830 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.830 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.830 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.830 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.089 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.089 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.089 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.347 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.347 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.347 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.347 [213/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.347 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.347 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:02.347 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.605 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.605 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.605 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:02.605 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.605 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.862 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.862 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:02.862 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.862 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.862 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:03.121 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.379 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.379 [229/268] Linking target lib/librte_eal.so.24.1 00:03:03.638 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:03.638 [231/268] Linking target lib/librte_pci.so.24.1 00:03:03.638 [232/268] Linking target lib/librte_meter.so.24.1 00:03:03.638 [233/268] Linking target lib/librte_timer.so.24.1 00:03:03.638 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:03.638 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:03.638 [236/268] Linking target lib/librte_ring.so.24.1 00:03:03.638 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:03.638 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:03.638 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:03.638 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:03.896 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:03.896 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:03.896 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:03.896 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:03.896 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:03.896 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:03.896 [247/268] Linking target lib/librte_mbuf.so.24.1 00:03:03.896 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:04.156 [249/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.156 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.156 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:03:04.156 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:04.156 [253/268] Linking target lib/librte_net.so.24.1 00:03:04.156 
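librte_ring, librte_mempool and librte_mbuf, linked above, are DPDK's core lock-free building blocks, and SPDK layers its own allocators over them. A short usage sketch for rte_ring (assumes the EAL is already initialized as in the earlier sketch; error handling trimmed for brevity):

    #include <rte_ring.h>
    #include <rte_lcore.h>
    #include <rte_errno.h>

    /* Create a 1024-slot MP/MC ring and pass one pointer through it. */
    static int ring_demo(void)
    {
        struct rte_ring *r = rte_ring_create("demo", 1024, rte_socket_id(), 0);
        static int value = 42;
        void *obj = NULL;

        if (r == NULL)
            return -rte_errno;
        rte_ring_enqueue(r, &value);   /* enqueue one pointer */
        rte_ring_dequeue(r, &obj);     /* dequeue it again */
        rte_ring_free(r);
        return (*(int *)obj == 42) ? 0 : -1;
    }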
[254/268] Linking target lib/librte_compressdev.so.24.1 00:03:04.415 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:04.415 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.415 [257/268] Linking target lib/librte_hash.so.24.1 00:03:04.415 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:04.415 [259/268] Linking target lib/librte_security.so.24.1 00:03:04.674 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.932 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.932 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:05.191 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:05.191 [264/268] Linking target lib/librte_power.so.24.1 00:03:07.728 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.728 [266/268] Linking static target lib/librte_vhost.a 00:03:09.630 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.631 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:09.631 INFO: autodetecting backend as ninja 00:03:09.631 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.006 CC lib/ut_mock/mock.o 00:03:11.006 CC lib/log/log.o 00:03:11.006 CC lib/log/log_flags.o 00:03:11.006 CC lib/log/log_deprecated.o 00:03:11.006 CC lib/ut/ut.o 00:03:11.006 LIB libspdk_log.a 00:03:11.006 LIB libspdk_ut_mock.a 00:03:11.006 SO libspdk_log.so.7.0 00:03:11.006 LIB libspdk_ut.a 00:03:11.006 SO libspdk_ut_mock.so.6.0 00:03:11.006 SO libspdk_ut.so.2.0 00:03:11.006 SYMLINK libspdk_log.so 00:03:11.265 SYMLINK libspdk_ut_mock.so 00:03:11.265 SYMLINK libspdk_ut.so 00:03:11.265 CC lib/ioat/ioat.o 00:03:11.265 CC lib/dma/dma.o 00:03:11.265 CC lib/util/base64.o 00:03:11.265 CC lib/util/bit_array.o 00:03:11.265 CC lib/util/cpuset.o 00:03:11.265 CC lib/util/crc16.o 00:03:11.265 CC lib/util/crc32.o 00:03:11.265 CXX lib/trace_parser/trace.o 00:03:11.265 CC lib/util/crc32c.o 00:03:11.523 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.523 CC lib/util/crc32_ieee.o 00:03:11.523 CC lib/vfio_user/host/vfio_user.o 00:03:11.523 CC lib/util/crc64.o 00:03:11.523 CC lib/util/dif.o 00:03:11.523 CC lib/util/fd.o 00:03:11.523 LIB libspdk_dma.a 00:03:11.523 SO libspdk_dma.so.4.0 00:03:11.523 CC lib/util/file.o 00:03:11.781 CC lib/util/hexlify.o 00:03:11.781 CC lib/util/iov.o 00:03:11.781 SYMLINK libspdk_dma.so 00:03:11.781 CC lib/util/math.o 00:03:11.781 CC lib/util/pipe.o 00:03:11.781 LIB libspdk_ioat.a 00:03:11.781 SO libspdk_ioat.so.7.0 00:03:11.781 CC lib/util/strerror_tls.o 00:03:11.781 CC lib/util/string.o 00:03:11.781 CC lib/util/uuid.o 00:03:11.781 LIB libspdk_vfio_user.a 00:03:11.781 SYMLINK libspdk_ioat.so 00:03:11.781 CC lib/util/fd_group.o 00:03:11.781 SO libspdk_vfio_user.so.5.0 00:03:12.039 CC lib/util/xor.o 00:03:12.039 CC lib/util/zipf.o 00:03:12.039 SYMLINK libspdk_vfio_user.so 00:03:12.297 LIB libspdk_util.a 00:03:12.556 SO libspdk_util.so.9.0 00:03:12.556 SYMLINK libspdk_util.so 00:03:12.812 LIB libspdk_trace_parser.a 00:03:12.812 SO libspdk_trace_parser.so.5.0 00:03:12.812 CC lib/json/json_parse.o 00:03:12.812 CC lib/json/json_util.o 00:03:12.812 CC lib/json/json_write.o 00:03:12.812 CC lib/env_dpdk/env.o 00:03:12.812 CC lib/vmd/vmd.o 00:03:12.812 CC lib/env_dpdk/memory.o 00:03:12.812 CC 
lib/conf/conf.o 00:03:12.812 CC lib/idxd/idxd.o 00:03:12.812 CC lib/rdma/common.o 00:03:12.812 SYMLINK libspdk_trace_parser.so 00:03:12.812 CC lib/env_dpdk/pci.o 00:03:13.070 LIB libspdk_conf.a 00:03:13.070 CC lib/vmd/led.o 00:03:13.070 SO libspdk_conf.so.6.0 00:03:13.070 CC lib/env_dpdk/init.o 00:03:13.070 LIB libspdk_json.a 00:03:13.070 CC lib/rdma/rdma_verbs.o 00:03:13.070 SYMLINK libspdk_conf.so 00:03:13.328 CC lib/env_dpdk/threads.o 00:03:13.328 SO libspdk_json.so.6.0 00:03:13.328 CC lib/env_dpdk/pci_ioat.o 00:03:13.328 SYMLINK libspdk_json.so 00:03:13.328 CC lib/env_dpdk/pci_virtio.o 00:03:13.328 CC lib/env_dpdk/pci_vmd.o 00:03:13.328 CC lib/idxd/idxd_user.o 00:03:13.328 CC lib/idxd/idxd_kernel.o 00:03:13.328 CC lib/env_dpdk/pci_idxd.o 00:03:13.587 LIB libspdk_rdma.a 00:03:13.587 CC lib/env_dpdk/pci_event.o 00:03:13.587 SO libspdk_rdma.so.6.0 00:03:13.587 CC lib/env_dpdk/sigbus_handler.o 00:03:13.587 CC lib/env_dpdk/pci_dpdk.o 00:03:13.587 SYMLINK libspdk_rdma.so 00:03:13.587 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:13.587 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:13.587 LIB libspdk_idxd.a 00:03:13.587 LIB libspdk_vmd.a 00:03:13.587 SO libspdk_idxd.so.12.0 00:03:13.587 SO libspdk_vmd.so.6.0 00:03:13.845 SYMLINK libspdk_idxd.so 00:03:13.845 SYMLINK libspdk_vmd.so 00:03:13.845 CC lib/jsonrpc/jsonrpc_server.o 00:03:13.845 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:13.845 CC lib/jsonrpc/jsonrpc_client.o 00:03:13.845 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.102 LIB libspdk_jsonrpc.a 00:03:14.102 SO libspdk_jsonrpc.so.6.0 00:03:14.102 SYMLINK libspdk_jsonrpc.so 00:03:14.360 CC lib/rpc/rpc.o 00:03:14.624 LIB libspdk_env_dpdk.a 00:03:14.624 LIB libspdk_rpc.a 00:03:14.624 SO libspdk_rpc.so.6.0 00:03:14.938 SO libspdk_env_dpdk.so.14.1 00:03:14.938 SYMLINK libspdk_rpc.so 00:03:14.938 SYMLINK libspdk_env_dpdk.so 00:03:14.938 CC lib/trace/trace.o 00:03:14.938 CC lib/trace/trace_flags.o 00:03:14.938 CC lib/trace/trace_rpc.o 00:03:14.938 CC lib/keyring/keyring.o 00:03:14.938 CC lib/keyring/keyring_rpc.o 00:03:14.938 CC lib/notify/notify.o 00:03:14.938 CC lib/notify/notify_rpc.o 00:03:15.197 LIB libspdk_notify.a 00:03:15.197 SO libspdk_notify.so.6.0 00:03:15.197 LIB libspdk_keyring.a 00:03:15.455 SYMLINK libspdk_notify.so 00:03:15.455 LIB libspdk_trace.a 00:03:15.455 SO libspdk_keyring.so.1.0 00:03:15.455 SO libspdk_trace.so.10.0 00:03:15.455 SYMLINK libspdk_trace.so 00:03:15.455 SYMLINK libspdk_keyring.so 00:03:15.712 CC lib/thread/thread.o 00:03:15.712 CC lib/thread/iobuf.o 00:03:15.712 CC lib/sock/sock.o 00:03:15.712 CC lib/sock/sock_rpc.o 00:03:16.278 LIB libspdk_sock.a 00:03:16.278 SO libspdk_sock.so.9.0 00:03:16.278 SYMLINK libspdk_sock.so 00:03:16.536 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.536 CC lib/nvme/nvme_ctrlr.o 00:03:16.536 CC lib/nvme/nvme_fabric.o 00:03:16.536 CC lib/nvme/nvme_ns_cmd.o 00:03:16.536 CC lib/nvme/nvme_ns.o 00:03:16.536 CC lib/nvme/nvme_pcie_common.o 00:03:16.536 CC lib/nvme/nvme_pcie.o 00:03:16.536 CC lib/nvme/nvme_qpair.o 00:03:16.536 CC lib/nvme/nvme.o 00:03:17.470 CC lib/nvme/nvme_quirks.o 00:03:17.470 CC lib/nvme/nvme_transport.o 00:03:17.728 CC lib/nvme/nvme_discovery.o 00:03:17.728 LIB libspdk_thread.a 00:03:17.728 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:17.728 SO libspdk_thread.so.10.0 00:03:17.728 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.728 CC lib/nvme/nvme_tcp.o 00:03:17.728 CC lib/nvme/nvme_opal.o 00:03:17.728 SYMLINK libspdk_thread.so 00:03:17.728 CC lib/nvme/nvme_io_msg.o 00:03:17.986 CC lib/nvme/nvme_poll_group.o 00:03:18.244 CC lib/nvme/nvme_zns.o 
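At this point the log has moved from DPDK to SPDK itself: these CC lines build lib/log, lib/util, lib/json, lib/thread and the userspace NVMe driver (lib/nvme/nvme_*.o). SPDK's env layer wraps the DPDK EAL built earlier; a minimal bootstrap sketch, with signatures as found in spdk/env.h of this era (worth re-checking against the exact tree):

    #include <stdio.h>
    #include <spdk/env.h>

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);    /* fill in defaults */
        opts.name = "env_demo";       /* hypothetical app name */

        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }
        puts("SPDK env (DPDK EAL underneath) is up");
        spdk_env_fini();
        return 0;
    }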
00:03:18.502 CC lib/nvme/nvme_stubs.o 00:03:18.502 CC lib/nvme/nvme_auth.o 00:03:18.502 CC lib/accel/accel.o 00:03:18.502 CC lib/nvme/nvme_cuse.o 00:03:18.502 CC lib/nvme/nvme_rdma.o 00:03:18.760 CC lib/blob/blobstore.o 00:03:18.760 CC lib/init/json_config.o 00:03:18.760 CC lib/init/subsystem.o 00:03:19.018 CC lib/init/subsystem_rpc.o 00:03:19.018 CC lib/init/rpc.o 00:03:19.018 CC lib/blob/request.o 00:03:19.018 CC lib/blob/zeroes.o 00:03:19.276 LIB libspdk_init.a 00:03:19.276 SO libspdk_init.so.5.0 00:03:19.276 SYMLINK libspdk_init.so 00:03:19.276 CC lib/blob/blob_bs_dev.o 00:03:19.534 CC lib/virtio/virtio.o 00:03:19.534 CC lib/accel/accel_rpc.o 00:03:19.534 CC lib/event/app.o 00:03:19.534 CC lib/event/reactor.o 00:03:19.534 CC lib/event/log_rpc.o 00:03:19.534 CC lib/virtio/virtio_vhost_user.o 00:03:19.791 CC lib/virtio/virtio_vfio_user.o 00:03:19.791 CC lib/virtio/virtio_pci.o 00:03:19.791 CC lib/accel/accel_sw.o 00:03:19.791 CC lib/event/app_rpc.o 00:03:19.791 CC lib/event/scheduler_static.o 00:03:20.049 LIB libspdk_virtio.a 00:03:20.049 LIB libspdk_accel.a 00:03:20.049 LIB libspdk_event.a 00:03:20.049 SO libspdk_virtio.so.7.0 00:03:20.049 SO libspdk_accel.so.15.0 00:03:20.049 SO libspdk_event.so.13.1 00:03:20.307 SYMLINK libspdk_virtio.so 00:03:20.307 SYMLINK libspdk_accel.so 00:03:20.307 SYMLINK libspdk_event.so 00:03:20.307 LIB libspdk_nvme.a 00:03:20.567 CC lib/bdev/bdev.o 00:03:20.567 CC lib/bdev/bdev_rpc.o 00:03:20.567 CC lib/bdev/bdev_zone.o 00:03:20.567 CC lib/bdev/scsi_nvme.o 00:03:20.567 CC lib/bdev/part.o 00:03:20.567 SO libspdk_nvme.so.13.0 00:03:20.887 SYMLINK libspdk_nvme.so 00:03:22.815 LIB libspdk_blob.a 00:03:22.815 SO libspdk_blob.so.11.0 00:03:23.076 SYMLINK libspdk_blob.so 00:03:23.334 CC lib/blobfs/tree.o 00:03:23.334 CC lib/lvol/lvol.o 00:03:23.334 CC lib/blobfs/blobfs.o 00:03:23.901 LIB libspdk_bdev.a 00:03:23.901 SO libspdk_bdev.so.15.0 00:03:24.159 SYMLINK libspdk_bdev.so 00:03:24.417 CC lib/nvmf/ctrlr.o 00:03:24.417 CC lib/nvmf/ctrlr_discovery.o 00:03:24.417 CC lib/nvmf/ctrlr_bdev.o 00:03:24.417 CC lib/nbd/nbd.o 00:03:24.417 CC lib/nbd/nbd_rpc.o 00:03:24.417 CC lib/scsi/dev.o 00:03:24.417 CC lib/ftl/ftl_core.o 00:03:24.417 CC lib/ublk/ublk.o 00:03:24.417 LIB libspdk_blobfs.a 00:03:24.417 CC lib/ublk/ublk_rpc.o 00:03:24.417 SO libspdk_blobfs.so.10.0 00:03:24.417 LIB libspdk_lvol.a 00:03:24.676 CC lib/scsi/lun.o 00:03:24.676 SO libspdk_lvol.so.10.0 00:03:24.676 SYMLINK libspdk_blobfs.so 00:03:24.676 CC lib/scsi/port.o 00:03:24.676 SYMLINK libspdk_lvol.so 00:03:24.676 CC lib/scsi/scsi.o 00:03:24.676 CC lib/scsi/scsi_bdev.o 00:03:24.676 CC lib/scsi/scsi_pr.o 00:03:24.676 CC lib/ftl/ftl_init.o 00:03:24.933 CC lib/scsi/scsi_rpc.o 00:03:24.933 LIB libspdk_nbd.a 00:03:24.933 SO libspdk_nbd.so.7.0 00:03:24.933 CC lib/scsi/task.o 00:03:24.933 CC lib/ftl/ftl_layout.o 00:03:24.933 CC lib/nvmf/subsystem.o 00:03:24.933 CC lib/ftl/ftl_debug.o 00:03:24.933 SYMLINK libspdk_nbd.so 00:03:24.933 CC lib/nvmf/nvmf.o 00:03:25.191 CC lib/ftl/ftl_io.o 00:03:25.191 LIB libspdk_ublk.a 00:03:25.191 CC lib/nvmf/nvmf_rpc.o 00:03:25.191 CC lib/nvmf/transport.o 00:03:25.191 SO libspdk_ublk.so.3.0 00:03:25.191 CC lib/ftl/ftl_sb.o 00:03:25.191 SYMLINK libspdk_ublk.so 00:03:25.191 CC lib/ftl/ftl_l2p.o 00:03:25.191 CC lib/ftl/ftl_l2p_flat.o 00:03:25.450 LIB libspdk_scsi.a 00:03:25.450 SO libspdk_scsi.so.9.0 00:03:25.450 CC lib/ftl/ftl_nv_cache.o 00:03:25.450 CC lib/nvmf/tcp.o 00:03:25.450 CC lib/ftl/ftl_band.o 00:03:25.450 CC lib/ftl/ftl_band_ops.o 00:03:25.450 SYMLINK libspdk_scsi.so 
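Alongside the NVMe driver, the objects above build SPDK's service libraries: lib/event (the app framework and its reactors), lib/bdev (the block device abstraction), lib/blob, lib/nvmf, lib/scsi and friends. The usual entry point is the event framework; a sketch, noting that spdk_app_opts_init takes the struct size in recent releases and that the names here are hypothetical:

    #include <stdio.h>
    #include <spdk/event.h>

    /* Runs on the first reactor once the framework is up. */
    static void hello_start(void *ctx)
    {
        (void)ctx;
        puts("app framework started");
        spdk_app_stop(0);                 /* leave the reactor loop */
    }

    int main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        (void)argc; (void)argv;           /* real apps parse args here */
        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "hello_app";

        rc = spdk_app_start(&opts, hello_start, NULL);
        spdk_app_fini();
        return rc;
    }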
00:03:25.451 CC lib/ftl/ftl_writer.o 00:03:26.019 CC lib/ftl/ftl_rq.o 00:03:26.019 CC lib/nvmf/stubs.o 00:03:26.019 CC lib/nvmf/mdns_server.o 00:03:26.019 CC lib/nvmf/rdma.o 00:03:26.019 CC lib/nvmf/auth.o 00:03:26.019 CC lib/ftl/ftl_reloc.o 00:03:26.277 CC lib/ftl/ftl_l2p_cache.o 00:03:26.536 CC lib/ftl/ftl_p2l.o 00:03:26.536 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.536 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.795 CC lib/vhost/vhost.o 00:03:26.795 CC lib/iscsi/conn.o 00:03:26.795 CC lib/iscsi/init_grp.o 00:03:26.795 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.795 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.055 CC lib/iscsi/iscsi.o 00:03:27.055 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.055 CC lib/iscsi/md5.o 00:03:27.055 CC lib/iscsi/param.o 00:03:27.055 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.055 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:27.312 CC lib/iscsi/portal_grp.o 00:03:27.312 CC lib/vhost/vhost_rpc.o 00:03:27.312 CC lib/iscsi/tgt_node.o 00:03:27.312 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.570 CC lib/iscsi/iscsi_subsystem.o 00:03:27.570 CC lib/vhost/vhost_scsi.o 00:03:27.570 CC lib/vhost/vhost_blk.o 00:03:27.570 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.570 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:27.570 CC lib/vhost/rte_vhost_user.o 00:03:27.828 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:27.828 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:27.828 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.086 CC lib/iscsi/iscsi_rpc.o 00:03:28.086 CC lib/iscsi/task.o 00:03:28.086 CC lib/ftl/utils/ftl_conf.o 00:03:28.086 CC lib/ftl/utils/ftl_md.o 00:03:28.344 CC lib/ftl/utils/ftl_mempool.o 00:03:28.344 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.344 CC lib/ftl/utils/ftl_property.o 00:03:28.344 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.344 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.603 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.603 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.603 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.603 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.603 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.603 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.603 LIB libspdk_iscsi.a 00:03:28.861 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.861 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.861 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.861 SO libspdk_iscsi.so.8.0 00:03:28.861 LIB libspdk_vhost.a 00:03:28.861 CC lib/ftl/base/ftl_base_dev.o 00:03:28.861 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.861 CC lib/ftl/ftl_trace.o 00:03:28.861 SO libspdk_vhost.so.8.0 00:03:29.119 SYMLINK libspdk_iscsi.so 00:03:29.119 SYMLINK libspdk_vhost.so 00:03:29.119 LIB libspdk_nvmf.a 00:03:29.119 LIB libspdk_ftl.a 00:03:29.119 SO libspdk_nvmf.so.18.1 00:03:29.377 SO libspdk_ftl.so.9.0 00:03:29.635 SYMLINK libspdk_nvmf.so 00:03:29.893 SYMLINK libspdk_ftl.so 00:03:30.460 CC module/env_dpdk/env_dpdk_rpc.o 00:03:30.460 CC module/blob/bdev/blob_bdev.o 00:03:30.460 CC module/sock/posix/posix.o 00:03:30.460 CC module/accel/error/accel_error.o 00:03:30.460 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:30.460 CC module/accel/dsa/accel_dsa.o 00:03:30.460 CC module/accel/ioat/accel_ioat.o 00:03:30.460 CC module/keyring/file/keyring.o 00:03:30.460 CC module/keyring/linux/keyring.o 00:03:30.460 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:30.460 LIB libspdk_env_dpdk_rpc.a 00:03:30.460 SO libspdk_env_dpdk_rpc.so.6.0 00:03:30.460 CC module/keyring/file/keyring_rpc.o 00:03:30.460 CC module/keyring/linux/keyring_rpc.o 00:03:30.718 LIB libspdk_scheduler_dpdk_governor.a 00:03:30.718 SYMLINK libspdk_env_dpdk_rpc.so 00:03:30.718 CC 
module/accel/error/accel_error_rpc.o 00:03:30.718 LIB libspdk_scheduler_dynamic.a 00:03:30.718 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:30.718 CC module/accel/ioat/accel_ioat_rpc.o 00:03:30.718 SO libspdk_scheduler_dynamic.so.4.0 00:03:30.718 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:30.718 LIB libspdk_blob_bdev.a 00:03:30.718 CC module/accel/dsa/accel_dsa_rpc.o 00:03:30.718 LIB libspdk_keyring_file.a 00:03:30.718 SYMLINK libspdk_scheduler_dynamic.so 00:03:30.718 SO libspdk_blob_bdev.so.11.0 00:03:30.718 LIB libspdk_keyring_linux.a 00:03:30.718 SO libspdk_keyring_file.so.1.0 00:03:30.718 CC module/scheduler/gscheduler/gscheduler.o 00:03:30.718 SO libspdk_keyring_linux.so.1.0 00:03:30.718 LIB libspdk_accel_error.a 00:03:30.718 SYMLINK libspdk_blob_bdev.so 00:03:30.718 SYMLINK libspdk_keyring_file.so 00:03:30.718 LIB libspdk_accel_ioat.a 00:03:30.718 SO libspdk_accel_error.so.2.0 00:03:30.718 SYMLINK libspdk_keyring_linux.so 00:03:30.977 SO libspdk_accel_ioat.so.6.0 00:03:30.977 LIB libspdk_accel_dsa.a 00:03:30.977 CC module/accel/iaa/accel_iaa.o 00:03:30.977 CC module/accel/iaa/accel_iaa_rpc.o 00:03:30.977 SYMLINK libspdk_accel_error.so 00:03:30.977 SO libspdk_accel_dsa.so.5.0 00:03:30.977 SYMLINK libspdk_accel_ioat.so 00:03:30.977 LIB libspdk_scheduler_gscheduler.a 00:03:30.977 SYMLINK libspdk_accel_dsa.so 00:03:30.977 SO libspdk_scheduler_gscheduler.so.4.0 00:03:30.977 SYMLINK libspdk_scheduler_gscheduler.so 00:03:30.977 CC module/bdev/delay/vbdev_delay.o 00:03:30.977 CC module/bdev/gpt/gpt.o 00:03:30.977 CC module/bdev/error/vbdev_error.o 00:03:30.977 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.257 CC module/bdev/lvol/vbdev_lvol.o 00:03:31.257 LIB libspdk_accel_iaa.a 00:03:31.257 CC module/bdev/malloc/bdev_malloc.o 00:03:31.257 SO libspdk_accel_iaa.so.3.0 00:03:31.257 CC module/bdev/null/bdev_null.o 00:03:31.257 CC module/bdev/nvme/bdev_nvme.o 00:03:31.257 SYMLINK libspdk_accel_iaa.so 00:03:31.257 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:31.257 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.257 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.257 LIB libspdk_sock_posix.a 00:03:31.257 SO libspdk_sock_posix.so.6.0 00:03:31.515 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.515 SYMLINK libspdk_sock_posix.so 00:03:31.515 CC module/bdev/nvme/nvme_rpc.o 00:03:31.515 LIB libspdk_blobfs_bdev.a 00:03:31.515 SO libspdk_blobfs_bdev.so.6.0 00:03:31.515 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.515 LIB libspdk_bdev_error.a 00:03:31.515 CC module/bdev/null/bdev_null_rpc.o 00:03:31.515 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.515 SYMLINK libspdk_blobfs_bdev.so 00:03:31.515 CC module/bdev/nvme/bdev_mdns_client.o 00:03:31.515 SO libspdk_bdev_error.so.6.0 00:03:31.773 LIB libspdk_bdev_gpt.a 00:03:31.773 SYMLINK libspdk_bdev_error.so 00:03:31.773 SO libspdk_bdev_gpt.so.6.0 00:03:31.773 LIB libspdk_bdev_delay.a 00:03:31.773 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.773 CC module/bdev/nvme/vbdev_opal.o 00:03:31.773 SO libspdk_bdev_delay.so.6.0 00:03:31.773 LIB libspdk_bdev_null.a 00:03:31.773 LIB libspdk_bdev_malloc.a 00:03:31.773 SYMLINK libspdk_bdev_gpt.so 00:03:31.773 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:31.773 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:31.773 SO libspdk_bdev_null.so.6.0 00:03:31.773 SO libspdk_bdev_malloc.so.6.0 00:03:31.773 SYMLINK libspdk_bdev_delay.so 00:03:31.773 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.773 SYMLINK libspdk_bdev_malloc.so 00:03:31.773 SYMLINK libspdk_bdev_null.so 00:03:32.031 CC module/bdev/passthru/vbdev_passthru_rpc.o 
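The module/bdev/* objects above are the pluggable bdev backends (malloc, null, nvme, gpt, lvol, passthru, ...) that register themselves with lib/bdev. From application code, any of them is reached through the same descriptor API; a sketch assuming a framework thread and an already-configured bdev (the name "Malloc0" is hypothetical, and the signatures should be checked against this tree):

    #include <stdio.h>
    #include <spdk/bdev.h>

    static void bdev_event_cb(enum spdk_bdev_event_type type,
                              struct spdk_bdev *bdev, void *ctx)
    {
        (void)type; (void)bdev; (void)ctx;   /* e.g. handle hot-remove */
    }

    /* Must be called from an SPDK app-framework thread. */
    static int open_and_close(void)
    {
        struct spdk_bdev_desc *desc = NULL;
        int rc = spdk_bdev_open_ext("Malloc0", true, bdev_event_cb, NULL, &desc);

        if (rc != 0) {
            fprintf(stderr, "open failed: %d\n", rc);
            return rc;
        }
        spdk_bdev_close(desc);
        return 0;
    }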
00:03:32.031 CC module/bdev/raid/bdev_raid.o 00:03:32.031 CC module/bdev/split/vbdev_split.o 00:03:32.031 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.031 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.031 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.289 LIB libspdk_bdev_lvol.a 00:03:32.289 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.289 SO libspdk_bdev_lvol.so.6.0 00:03:32.289 CC module/bdev/xnvme/bdev_xnvme.o 00:03:32.290 LIB libspdk_bdev_passthru.a 00:03:32.290 CC module/bdev/raid/raid0.o 00:03:32.290 SO libspdk_bdev_passthru.so.6.0 00:03:32.290 SYMLINK libspdk_bdev_lvol.so 00:03:32.290 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:32.290 LIB libspdk_bdev_split.a 00:03:32.290 SYMLINK libspdk_bdev_passthru.so 00:03:32.290 SO libspdk_bdev_split.so.6.0 00:03:32.290 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.548 SYMLINK libspdk_bdev_split.so 00:03:32.548 CC module/bdev/raid/raid1.o 00:03:32.548 CC module/bdev/raid/concat.o 00:03:32.548 CC module/bdev/aio/bdev_aio.o 00:03:32.548 LIB libspdk_bdev_xnvme.a 00:03:32.548 LIB libspdk_bdev_zone_block.a 00:03:32.548 SO libspdk_bdev_xnvme.so.3.0 00:03:32.548 SO libspdk_bdev_zone_block.so.6.0 00:03:32.548 CC module/bdev/ftl/bdev_ftl.o 00:03:32.548 SYMLINK libspdk_bdev_xnvme.so 00:03:32.548 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.814 SYMLINK libspdk_bdev_zone_block.so 00:03:32.814 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.814 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.814 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.814 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.814 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.814 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.814 LIB libspdk_bdev_aio.a 00:03:33.074 SO libspdk_bdev_aio.so.6.0 00:03:33.074 LIB libspdk_bdev_ftl.a 00:03:33.074 SO libspdk_bdev_ftl.so.6.0 00:03:33.074 SYMLINK libspdk_bdev_aio.so 00:03:33.074 SYMLINK libspdk_bdev_ftl.so 00:03:33.074 LIB libspdk_bdev_iscsi.a 00:03:33.074 SO libspdk_bdev_iscsi.so.6.0 00:03:33.333 SYMLINK libspdk_bdev_iscsi.so 00:03:33.333 LIB libspdk_bdev_raid.a 00:03:33.333 LIB libspdk_bdev_virtio.a 00:03:33.591 SO libspdk_bdev_virtio.so.6.0 00:03:33.591 SO libspdk_bdev_raid.so.6.0 00:03:33.591 SYMLINK libspdk_bdev_virtio.so 00:03:33.591 SYMLINK libspdk_bdev_raid.so 00:03:34.159 LIB libspdk_bdev_nvme.a 00:03:34.159 SO libspdk_bdev_nvme.so.7.0 00:03:34.418 SYMLINK libspdk_bdev_nvme.so 00:03:34.984 CC module/event/subsystems/keyring/keyring.o 00:03:34.984 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.984 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.984 CC module/event/subsystems/vmd/vmd.o 00:03:34.984 CC module/event/subsystems/sock/sock.o 00:03:34.984 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.984 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.984 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.984 LIB libspdk_event_scheduler.a 00:03:34.984 LIB libspdk_event_keyring.a 00:03:34.984 LIB libspdk_event_vhost_blk.a 00:03:34.984 LIB libspdk_event_iobuf.a 00:03:34.984 LIB libspdk_event_sock.a 00:03:34.984 LIB libspdk_event_vmd.a 00:03:34.984 SO libspdk_event_scheduler.so.4.0 00:03:34.984 SO libspdk_event_keyring.so.1.0 00:03:34.984 SO libspdk_event_vhost_blk.so.3.0 00:03:34.984 SO libspdk_event_iobuf.so.3.0 00:03:34.984 SO libspdk_event_sock.so.5.0 00:03:34.984 SO libspdk_event_vmd.so.6.0 00:03:35.243 SYMLINK libspdk_event_vhost_blk.so 00:03:35.243 SYMLINK libspdk_event_scheduler.so 00:03:35.243 SYMLINK libspdk_event_keyring.so 00:03:35.243 SYMLINK libspdk_event_sock.so 00:03:35.243 SYMLINK 
libspdk_event_vmd.so 00:03:35.243 SYMLINK libspdk_event_iobuf.so 00:03:35.502 CC module/event/subsystems/accel/accel.o 00:03:35.502 LIB libspdk_event_accel.a 00:03:35.502 SO libspdk_event_accel.so.6.0 00:03:35.761 SYMLINK libspdk_event_accel.so 00:03:36.020 CC module/event/subsystems/bdev/bdev.o 00:03:36.279 LIB libspdk_event_bdev.a 00:03:36.279 SO libspdk_event_bdev.so.6.0 00:03:36.279 SYMLINK libspdk_event_bdev.so 00:03:36.538 CC module/event/subsystems/nbd/nbd.o 00:03:36.538 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.538 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.538 CC module/event/subsystems/scsi/scsi.o 00:03:36.538 CC module/event/subsystems/ublk/ublk.o 00:03:36.538 LIB libspdk_event_ublk.a 00:03:36.538 LIB libspdk_event_nbd.a 00:03:36.797 SO libspdk_event_ublk.so.3.0 00:03:36.797 SO libspdk_event_nbd.so.6.0 00:03:36.797 LIB libspdk_event_scsi.a 00:03:36.797 SO libspdk_event_scsi.so.6.0 00:03:36.797 SYMLINK libspdk_event_nbd.so 00:03:36.797 SYMLINK libspdk_event_ublk.so 00:03:36.797 LIB libspdk_event_nvmf.a 00:03:36.797 SO libspdk_event_nvmf.so.6.0 00:03:36.797 SYMLINK libspdk_event_scsi.so 00:03:36.797 SYMLINK libspdk_event_nvmf.so 00:03:37.056 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.056 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.314 LIB libspdk_event_vhost_scsi.a 00:03:37.314 LIB libspdk_event_iscsi.a 00:03:37.314 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.314 SO libspdk_event_iscsi.so.6.0 00:03:37.314 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.314 SYMLINK libspdk_event_iscsi.so 00:03:37.573 SO libspdk.so.6.0 00:03:37.573 SYMLINK libspdk.so 00:03:37.831 CC app/spdk_lspci/spdk_lspci.o 00:03:37.831 CC app/trace_record/trace_record.o 00:03:37.831 CXX app/trace/trace.o 00:03:37.831 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.831 CC app/nvmf_tgt/nvmf_main.o 00:03:37.831 CC app/spdk_tgt/spdk_tgt.o 00:03:37.831 CC examples/accel/perf/accel_perf.o 00:03:37.831 CC examples/blob/hello_world/hello_blob.o 00:03:37.831 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.831 CC test/accel/dif/dif.o 00:03:37.831 LINK spdk_lspci 00:03:38.090 LINK nvmf_tgt 00:03:38.090 LINK iscsi_tgt 00:03:38.090 LINK spdk_trace_record 00:03:38.090 LINK spdk_tgt 00:03:38.348 LINK hello_blob 00:03:38.348 LINK hello_bdev 00:03:38.348 LINK spdk_trace 00:03:38.348 CC app/spdk_nvme_perf/perf.o 00:03:38.607 CC examples/ioat/perf/perf.o 00:03:38.607 CC examples/nvme/hello_world/hello_world.o 00:03:38.607 LINK dif 00:03:38.607 LINK accel_perf 00:03:38.607 CC examples/sock/hello_world/hello_sock.o 00:03:38.607 CC examples/blob/cli/blobcli.o 00:03:38.607 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.607 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.607 CC examples/nvmf/nvmf/nvmf.o 00:03:38.897 LINK ioat_perf 00:03:38.897 LINK hello_world 00:03:38.897 LINK lsvmd 00:03:38.897 CC examples/ioat/verify/verify.o 00:03:38.897 LINK hello_sock 00:03:38.897 CC app/spdk_nvme_identify/identify.o 00:03:38.897 CC test/app/bdev_svc/bdev_svc.o 00:03:39.156 LINK nvmf 00:03:39.156 CC examples/nvme/reconnect/reconnect.o 00:03:39.156 CC examples/vmd/led/led.o 00:03:39.156 LINK verify 00:03:39.156 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.156 LINK blobcli 00:03:39.156 LINK bdev_svc 00:03:39.156 LINK led 00:03:39.415 CC examples/nvme/arbitration/arbitration.o 00:03:39.415 CC examples/nvme/hotplug/hotplug.o 00:03:39.415 LINK spdk_nvme_perf 00:03:39.415 LINK reconnect 00:03:39.415 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:39.415 CC examples/nvme/abort/abort.o 00:03:39.673 LINK bdevperf 00:03:39.673 LINK 
hotplug 00:03:39.673 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.673 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.673 LINK cmb_copy 00:03:39.673 LINK arbitration 00:03:39.673 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:39.932 LINK nvme_manage 00:03:39.932 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:39.932 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:39.932 LINK abort 00:03:39.932 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.932 CC app/spdk_top/spdk_top.o 00:03:39.932 CC app/vhost/vhost.o 00:03:40.191 LINK spdk_nvme_identify 00:03:40.191 LINK pmr_persistence 00:03:40.191 LINK nvme_fuzz 00:03:40.191 CC examples/util/zipf/zipf.o 00:03:40.191 LINK spdk_nvme_discover 00:03:40.191 LINK vhost 00:03:40.449 LINK vhost_fuzz 00:03:40.449 CC examples/thread/thread/thread_ex.o 00:03:40.449 LINK zipf 00:03:40.449 CC test/app/histogram_perf/histogram_perf.o 00:03:40.449 CC test/app/jsoncat/jsoncat.o 00:03:40.449 CC examples/idxd/perf/perf.o 00:03:40.449 CC test/app/stub/stub.o 00:03:40.708 LINK histogram_perf 00:03:40.708 LINK jsoncat 00:03:40.708 CC app/spdk_dd/spdk_dd.o 00:03:40.708 LINK thread 00:03:40.708 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:40.708 CC app/fio/nvme/fio_plugin.o 00:03:40.708 LINK stub 00:03:40.972 CC app/fio/bdev/fio_plugin.o 00:03:40.972 LINK idxd_perf 00:03:40.972 LINK interrupt_tgt 00:03:40.972 CC test/bdev/bdevio/bdevio.o 00:03:40.972 TEST_HEADER include/spdk/accel.h 00:03:40.972 TEST_HEADER include/spdk/accel_module.h 00:03:40.972 TEST_HEADER include/spdk/assert.h 00:03:40.972 TEST_HEADER include/spdk/barrier.h 00:03:40.972 TEST_HEADER include/spdk/base64.h 00:03:40.972 TEST_HEADER include/spdk/bdev.h 00:03:40.972 TEST_HEADER include/spdk/bdev_module.h 00:03:40.972 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.972 TEST_HEADER include/spdk/bit_array.h 00:03:40.972 TEST_HEADER include/spdk/bit_pool.h 00:03:40.972 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.972 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.972 TEST_HEADER include/spdk/blobfs.h 00:03:40.972 TEST_HEADER include/spdk/blob.h 00:03:40.972 TEST_HEADER include/spdk/conf.h 00:03:40.972 TEST_HEADER include/spdk/config.h 00:03:40.972 TEST_HEADER include/spdk/cpuset.h 00:03:40.972 TEST_HEADER include/spdk/crc16.h 00:03:40.972 TEST_HEADER include/spdk/crc32.h 00:03:40.972 TEST_HEADER include/spdk/crc64.h 00:03:40.972 TEST_HEADER include/spdk/dif.h 00:03:40.972 TEST_HEADER include/spdk/dma.h 00:03:40.972 TEST_HEADER include/spdk/endian.h 00:03:40.972 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.972 LINK spdk_dd 00:03:40.972 TEST_HEADER include/spdk/env.h 00:03:40.972 TEST_HEADER include/spdk/event.h 00:03:40.972 TEST_HEADER include/spdk/fd_group.h 00:03:40.972 TEST_HEADER include/spdk/fd.h 00:03:40.972 TEST_HEADER include/spdk/file.h 00:03:40.972 TEST_HEADER include/spdk/ftl.h 00:03:40.972 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.972 TEST_HEADER include/spdk/hexlify.h 00:03:40.972 TEST_HEADER include/spdk/histogram_data.h 00:03:40.972 TEST_HEADER include/spdk/idxd.h 00:03:40.972 TEST_HEADER include/spdk/idxd_spec.h 00:03:40.972 TEST_HEADER include/spdk/init.h 00:03:40.972 CC test/blobfs/mkfs/mkfs.o 00:03:40.972 TEST_HEADER include/spdk/ioat.h 00:03:40.972 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.972 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.972 TEST_HEADER include/spdk/json.h 00:03:40.972 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.972 TEST_HEADER include/spdk/keyring.h 00:03:40.972 TEST_HEADER include/spdk/keyring_module.h 00:03:40.972 TEST_HEADER 
include/spdk/likely.h 00:03:40.972 TEST_HEADER include/spdk/log.h 00:03:40.972 TEST_HEADER include/spdk/lvol.h 00:03:40.972 TEST_HEADER include/spdk/memory.h 00:03:40.972 TEST_HEADER include/spdk/mmio.h 00:03:40.972 TEST_HEADER include/spdk/nbd.h 00:03:40.972 TEST_HEADER include/spdk/notify.h 00:03:40.972 TEST_HEADER include/spdk/nvme.h 00:03:40.972 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.972 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.972 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:41.233 TEST_HEADER include/spdk/nvme_spec.h 00:03:41.233 TEST_HEADER include/spdk/nvme_zns.h 00:03:41.233 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:41.233 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:41.233 TEST_HEADER include/spdk/nvmf.h 00:03:41.233 TEST_HEADER include/spdk/nvmf_spec.h 00:03:41.233 TEST_HEADER include/spdk/nvmf_transport.h 00:03:41.233 TEST_HEADER include/spdk/opal.h 00:03:41.233 TEST_HEADER include/spdk/opal_spec.h 00:03:41.233 TEST_HEADER include/spdk/pci_ids.h 00:03:41.233 TEST_HEADER include/spdk/pipe.h 00:03:41.233 TEST_HEADER include/spdk/queue.h 00:03:41.233 TEST_HEADER include/spdk/reduce.h 00:03:41.233 TEST_HEADER include/spdk/rpc.h 00:03:41.233 TEST_HEADER include/spdk/scheduler.h 00:03:41.233 TEST_HEADER include/spdk/scsi.h 00:03:41.233 TEST_HEADER include/spdk/scsi_spec.h 00:03:41.233 TEST_HEADER include/spdk/sock.h 00:03:41.233 LINK spdk_top 00:03:41.233 TEST_HEADER include/spdk/stdinc.h 00:03:41.233 TEST_HEADER include/spdk/string.h 00:03:41.233 TEST_HEADER include/spdk/thread.h 00:03:41.233 TEST_HEADER include/spdk/trace.h 00:03:41.233 TEST_HEADER include/spdk/trace_parser.h 00:03:41.233 TEST_HEADER include/spdk/tree.h 00:03:41.233 TEST_HEADER include/spdk/ublk.h 00:03:41.233 TEST_HEADER include/spdk/util.h 00:03:41.233 TEST_HEADER include/spdk/uuid.h 00:03:41.233 TEST_HEADER include/spdk/version.h 00:03:41.233 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:41.233 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:41.233 TEST_HEADER include/spdk/vhost.h 00:03:41.233 TEST_HEADER include/spdk/vmd.h 00:03:41.233 TEST_HEADER include/spdk/xor.h 00:03:41.233 TEST_HEADER include/spdk/zipf.h 00:03:41.233 CXX test/cpp_headers/accel.o 00:03:41.233 CC test/dma/test_dma/test_dma.o 00:03:41.233 LINK mkfs 00:03:41.492 CC test/env/vtophys/vtophys.o 00:03:41.492 CXX test/cpp_headers/accel_module.o 00:03:41.492 LINK spdk_nvme 00:03:41.492 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:41.492 CC test/env/mem_callbacks/mem_callbacks.o 00:03:41.492 LINK spdk_bdev 00:03:41.492 LINK vtophys 00:03:41.492 CXX test/cpp_headers/assert.o 00:03:41.492 LINK bdevio 00:03:41.492 LINK env_dpdk_post_init 00:03:41.750 LINK test_dma 00:03:41.750 CC test/env/memory/memory_ut.o 00:03:41.750 CC test/env/pci/pci_ut.o 00:03:41.750 CXX test/cpp_headers/barrier.o 00:03:41.750 CC test/event/event_perf/event_perf.o 00:03:41.750 CXX test/cpp_headers/base64.o 00:03:41.750 CXX test/cpp_headers/bdev.o 00:03:41.750 LINK iscsi_fuzz 00:03:42.009 LINK event_perf 00:03:42.009 CXX test/cpp_headers/bdev_module.o 00:03:42.009 CC test/lvol/esnap/esnap.o 00:03:42.009 CXX test/cpp_headers/bdev_zone.o 00:03:42.009 CC test/event/reactor/reactor.o 00:03:42.009 CC test/nvme/aer/aer.o 00:03:42.009 LINK mem_callbacks 00:03:42.267 LINK reactor 00:03:42.267 CC test/event/reactor_perf/reactor_perf.o 00:03:42.267 CXX test/cpp_headers/bit_array.o 00:03:42.267 CC test/event/app_repeat/app_repeat.o 00:03:42.267 LINK pci_ut 00:03:42.267 CC test/rpc_client/rpc_client_test.o 00:03:42.267 CC test/event/scheduler/scheduler.o 
00:03:42.267 LINK reactor_perf 00:03:42.267 CXX test/cpp_headers/bit_pool.o 00:03:42.267 LINK app_repeat 00:03:42.267 LINK aer 00:03:42.526 CC test/nvme/reset/reset.o 00:03:42.526 LINK rpc_client_test 00:03:42.526 CXX test/cpp_headers/blob_bdev.o 00:03:42.526 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.526 LINK scheduler 00:03:42.526 CXX test/cpp_headers/blobfs.o 00:03:42.526 CC test/nvme/sgl/sgl.o 00:03:42.526 CXX test/cpp_headers/blob.o 00:03:42.784 CC test/thread/poller_perf/poller_perf.o 00:03:42.784 LINK reset 00:03:42.784 CXX test/cpp_headers/conf.o 00:03:42.784 CXX test/cpp_headers/config.o 00:03:42.784 CXX test/cpp_headers/cpuset.o 00:03:42.784 CC test/nvme/e2edp/nvme_dp.o 00:03:42.784 LINK poller_perf 00:03:42.784 CC test/nvme/overhead/overhead.o 00:03:42.784 CC test/nvme/err_injection/err_injection.o 00:03:43.043 LINK sgl 00:03:43.043 LINK memory_ut 00:03:43.043 CC test/nvme/startup/startup.o 00:03:43.043 CXX test/cpp_headers/crc16.o 00:03:43.043 CC test/nvme/reserve/reserve.o 00:03:43.043 CC test/nvme/simple_copy/simple_copy.o 00:03:43.043 LINK err_injection 00:03:43.043 LINK nvme_dp 00:03:43.301 CXX test/cpp_headers/crc32.o 00:03:43.301 CC test/nvme/connect_stress/connect_stress.o 00:03:43.301 LINK startup 00:03:43.301 LINK overhead 00:03:43.301 CC test/nvme/boot_partition/boot_partition.o 00:03:43.301 LINK reserve 00:03:43.301 LINK simple_copy 00:03:43.301 CC test/nvme/compliance/nvme_compliance.o 00:03:43.301 CXX test/cpp_headers/crc64.o 00:03:43.301 CXX test/cpp_headers/dif.o 00:03:43.301 LINK connect_stress 00:03:43.301 CC test/nvme/fused_ordering/fused_ordering.o 00:03:43.301 LINK boot_partition 00:03:43.560 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:43.560 CC test/nvme/fdp/fdp.o 00:03:43.560 CXX test/cpp_headers/dma.o 00:03:43.560 CXX test/cpp_headers/endian.o 00:03:43.560 CXX test/cpp_headers/env_dpdk.o 00:03:43.560 CXX test/cpp_headers/env.o 00:03:43.560 CC test/nvme/cuse/cuse.o 00:03:43.560 LINK fused_ordering 00:03:43.818 LINK doorbell_aers 00:03:43.818 CXX test/cpp_headers/event.o 00:03:43.818 CXX test/cpp_headers/fd_group.o 00:03:43.818 CXX test/cpp_headers/fd.o 00:03:43.818 LINK nvme_compliance 00:03:43.818 CXX test/cpp_headers/file.o 00:03:43.819 CXX test/cpp_headers/ftl.o 00:03:43.819 CXX test/cpp_headers/gpt_spec.o 00:03:43.819 CXX test/cpp_headers/hexlify.o 00:03:44.079 CXX test/cpp_headers/histogram_data.o 00:03:44.079 LINK fdp 00:03:44.079 CXX test/cpp_headers/idxd.o 00:03:44.079 CXX test/cpp_headers/idxd_spec.o 00:03:44.079 CXX test/cpp_headers/init.o 00:03:44.079 CXX test/cpp_headers/ioat.o 00:03:44.079 CXX test/cpp_headers/ioat_spec.o 00:03:44.079 CXX test/cpp_headers/iscsi_spec.o 00:03:44.079 CXX test/cpp_headers/json.o 00:03:44.079 CXX test/cpp_headers/jsonrpc.o 00:03:44.079 CXX test/cpp_headers/keyring.o 00:03:44.079 CXX test/cpp_headers/keyring_module.o 00:03:44.079 CXX test/cpp_headers/likely.o 00:03:44.337 CXX test/cpp_headers/log.o 00:03:44.337 CXX test/cpp_headers/lvol.o 00:03:44.337 CXX test/cpp_headers/memory.o 00:03:44.337 CXX test/cpp_headers/mmio.o 00:03:44.337 CXX test/cpp_headers/nbd.o 00:03:44.337 CXX test/cpp_headers/notify.o 00:03:44.337 CXX test/cpp_headers/nvme.o 00:03:44.337 CXX test/cpp_headers/nvme_intel.o 00:03:44.337 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.337 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.337 CXX test/cpp_headers/nvme_spec.o 00:03:44.337 CXX test/cpp_headers/nvme_zns.o 00:03:44.595 CXX test/cpp_headers/nvmf_cmd.o 00:03:44.595 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.595 CXX test/cpp_headers/nvmf.o 
00:03:44.595 CXX test/cpp_headers/nvmf_spec.o 00:03:44.595 CXX test/cpp_headers/nvmf_transport.o 00:03:44.595 CXX test/cpp_headers/opal.o 00:03:44.595 CXX test/cpp_headers/opal_spec.o 00:03:44.595 CXX test/cpp_headers/pci_ids.o 00:03:44.595 CXX test/cpp_headers/pipe.o 00:03:44.595 CXX test/cpp_headers/queue.o 00:03:44.595 CXX test/cpp_headers/reduce.o 00:03:44.595 CXX test/cpp_headers/rpc.o 00:03:44.854 CXX test/cpp_headers/scheduler.o 00:03:44.854 CXX test/cpp_headers/scsi.o 00:03:44.854 CXX test/cpp_headers/scsi_spec.o 00:03:44.854 CXX test/cpp_headers/sock.o 00:03:44.854 CXX test/cpp_headers/stdinc.o 00:03:44.854 CXX test/cpp_headers/string.o 00:03:44.854 CXX test/cpp_headers/thread.o 00:03:44.854 CXX test/cpp_headers/trace.o 00:03:44.854 CXX test/cpp_headers/trace_parser.o 00:03:44.854 CXX test/cpp_headers/tree.o 00:03:45.112 CXX test/cpp_headers/ublk.o 00:03:45.112 CXX test/cpp_headers/util.o 00:03:45.112 CXX test/cpp_headers/uuid.o 00:03:45.112 CXX test/cpp_headers/version.o 00:03:45.112 CXX test/cpp_headers/vfio_user_pci.o 00:03:45.112 CXX test/cpp_headers/vfio_user_spec.o 00:03:45.112 CXX test/cpp_headers/vhost.o 00:03:45.112 CXX test/cpp_headers/vmd.o 00:03:45.112 CXX test/cpp_headers/xor.o 00:03:45.113 CXX test/cpp_headers/zipf.o 00:03:45.113 LINK cuse 00:03:49.345 LINK esnap 00:03:49.345 ************************************ 00:03:49.345 END TEST make 00:03:49.345 ************************************ 00:03:49.345 00:03:49.345 real 1m23.864s 00:03:49.345 user 8m28.916s 00:03:49.345 sys 1m40.626s 00:03:49.345 09:52:38 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:03:49.345 09:52:38 make -- common/autotest_common.sh@10 -- $ set +x 00:03:49.604 09:52:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.604 09:52:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.604 09:52:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.604 09:52:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.604 09:52:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.604 09:52:38 -- pm/common@44 -- $ pid=5238 00:03:49.604 09:52:38 -- pm/common@50 -- $ kill -TERM 5238 00:03:49.604 09:52:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.604 09:52:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.604 09:52:38 -- pm/common@44 -- $ pid=5239 00:03:49.604 09:52:38 -- pm/common@50 -- $ kill -TERM 5239 00:03:49.604 09:52:38 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:49.604 09:52:38 -- nvmf/common.sh@7 -- # uname -s 00:03:49.604 09:52:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.604 09:52:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.604 09:52:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.604 09:52:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.604 09:52:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.604 09:52:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.604 09:52:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.604 09:52:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.604 09:52:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.604 09:52:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.604 09:52:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97c1d2c7-f3c7-4dc5-9a74-d2f35dc4a034 00:03:49.604 09:52:38 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=97c1d2c7-f3c7-4dc5-9a74-d2f35dc4a034 00:03:49.604 09:52:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.604 09:52:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.604 09:52:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:49.604 09:52:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.604 09:52:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:49.604 09:52:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.604 09:52:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.604 09:52:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.604 09:52:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.604 09:52:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.604 09:52:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.604 09:52:38 -- paths/export.sh@5 -- # export PATH 00:03:49.604 09:52:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.604 09:52:38 -- nvmf/common.sh@47 -- # : 0 00:03:49.604 09:52:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:49.604 09:52:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:49.604 09:52:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.604 09:52:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.604 09:52:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.604 09:52:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:49.604 09:52:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:49.604 09:52:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:49.604 09:52:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.604 09:52:39 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.604 09:52:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.604 09:52:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.604 09:52:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.604 09:52:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.604 09:52:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.604 09:52:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.604 09:52:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.604 09:52:39 -- spdk/autotest.sh@46 
-- # udevadm=/usr/sbin/udevadm 00:03:49.604 09:52:39 -- spdk/autotest.sh@48 -- # udevadm_pid=53771 00:03:49.604 09:52:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.604 09:52:39 -- pm/common@17 -- # local monitor 00:03:49.604 09:52:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.604 09:52:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.604 09:52:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.604 09:52:39 -- pm/common@25 -- # sleep 1 00:03:49.604 09:52:39 -- pm/common@21 -- # date +%s 00:03:49.604 09:52:39 -- pm/common@21 -- # date +%s 00:03:49.604 09:52:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1718013159 00:03:49.604 09:52:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1718013159 00:03:49.604 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1718013159_collect-vmstat.pm.log 00:03:49.604 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1718013159_collect-cpu-load.pm.log 00:03:50.979 09:52:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.979 09:52:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.979 09:52:40 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:50.979 09:52:40 -- common/autotest_common.sh@10 -- # set +x 00:03:50.979 09:52:40 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.979 09:52:40 -- common/autotest_common.sh@747 -- # xtrace_disable 00:03:50.979 09:52:40 -- common/autotest_common.sh@10 -- # set +x 00:03:50.979 09:52:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.979 09:52:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.979 09:52:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.979 09:52:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.979 09:52:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.979 09:52:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.979 09:52:40 -- common/autotest_common.sh@1454 -- # uname 00:03:50.979 09:52:40 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:03:50.979 09:52:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.979 09:52:40 -- common/autotest_common.sh@1474 -- # uname 00:03:50.979 09:52:40 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:03:50.979 09:52:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:50.979 09:52:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:50.979 09:52:40 -- spdk/autotest.sh@72 -- # hash lcov 00:03:50.979 09:52:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:50.979 09:52:40 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:50.979 --rc lcov_branch_coverage=1 00:03:50.979 --rc lcov_function_coverage=1 00:03:50.979 --rc genhtml_branch_coverage=1 00:03:50.979 --rc genhtml_function_coverage=1 00:03:50.979 --rc genhtml_legend=1 00:03:50.979 --rc geninfo_all_blocks=1 00:03:50.979 ' 00:03:50.979 09:52:40 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:50.979 --rc lcov_branch_coverage=1 00:03:50.979 --rc lcov_function_coverage=1 00:03:50.979 --rc genhtml_branch_coverage=1 00:03:50.979 --rc 
genhtml_function_coverage=1 00:03:50.979 --rc genhtml_legend=1 00:03:50.979 --rc geninfo_all_blocks=1 00:03:50.979 ' 00:03:50.979 09:52:40 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:50.979 --rc lcov_branch_coverage=1 00:03:50.979 --rc lcov_function_coverage=1 00:03:50.979 --rc genhtml_branch_coverage=1 00:03:50.979 --rc genhtml_function_coverage=1 00:03:50.979 --rc genhtml_legend=1 00:03:50.979 --rc geninfo_all_blocks=1 00:03:50.979 --no-external' 00:03:50.979 09:52:40 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:50.979 --rc lcov_branch_coverage=1 00:03:50.979 --rc lcov_function_coverage=1 00:03:50.979 --rc genhtml_branch_coverage=1 00:03:50.979 --rc genhtml_function_coverage=1 00:03:50.979 --rc genhtml_legend=1 00:03:50.979 --rc geninfo_all_blocks=1 00:03:50.979 --no-external' 00:03:50.979 09:52:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:50.979 lcov: LCOV version 1.14 00:03:50.979 09:52:40 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:09.069 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:09.069 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:19.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:19.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 
00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 
00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 
00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 
00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:19.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:19.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:19.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:19.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:19.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:19.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:19.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:19.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:19.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:19.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:19.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:19.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:19.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:19.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:19.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:19.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:22.360 09:53:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:22.360 09:53:11 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:22.360 09:53:11 -- common/autotest_common.sh@10 -- # set +x 00:04:22.360 09:53:11 -- spdk/autotest.sh@91 -- # rm -f 00:04:22.360 09:53:11 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.195 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:23.195 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:23.195 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:23.195 0000:00:13.0 (1b36 0010): Already using the nvme driver 
00:04:23.195 09:53:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:23.195 09:53:12 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:23.195 09:53:12 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:23.195 09:53:12 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:23.195 09:53:12 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:23.195 09:53:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:23.195 09:53:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:23.195 09:53:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1661 -- # local device=nvme2n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:23.195 09:53:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n2 00:04:23.195 09:53:12 -- common/autotest_common.sh@1661 -- # local device=nvme2n2 00:04:23.195 09:53:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:23.195 09:53:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n3 00:04:23.195 09:53:12 -- common/autotest_common.sh@1661 -- # local device=nvme2n3 00:04:23.195 09:53:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:23.195 09:53:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3c3n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1661 -- # local device=nvme3c3n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:23.195 09:53:12 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1661 -- # local device=nvme3n1 00:04:23.195 09:53:12 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:23.195 09:53:12 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:23.195 09:53:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:23.195 09:53:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.195 09:53:12 -- spdk/autotest.sh@112 -- 
# [[ -z '' ]] 00:04:23.195 09:53:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:23.195 09:53:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:23.195 09:53:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.195 No valid GPT data, bailing 00:04:23.195 09:53:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.454 09:53:12 -- scripts/common.sh@391 -- # pt= 00:04:23.454 09:53:12 -- scripts/common.sh@392 -- # return 1 00:04:23.454 09:53:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.454 1+0 records in 00:04:23.454 1+0 records out 00:04:23.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115882 s, 90.5 MB/s 00:04:23.454 09:53:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.454 09:53:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.454 09:53:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:23.454 09:53:12 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:23.454 09:53:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:23.454 No valid GPT data, bailing 00:04:23.454 09:53:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.454 09:53:12 -- scripts/common.sh@391 -- # pt= 00:04:23.454 09:53:12 -- scripts/common.sh@392 -- # return 1 00:04:23.454 09:53:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:23.454 1+0 records in 00:04:23.454 1+0 records out 00:04:23.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415896 s, 252 MB/s 00:04:23.454 09:53:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.454 09:53:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.454 09:53:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:23.454 09:53:12 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:23.454 09:53:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:23.454 No valid GPT data, bailing 00:04:23.454 09:53:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:23.454 09:53:12 -- scripts/common.sh@391 -- # pt= 00:04:23.454 09:53:12 -- scripts/common.sh@392 -- # return 1 00:04:23.454 09:53:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:23.454 1+0 records in 00:04:23.454 1+0 records out 00:04:23.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402047 s, 261 MB/s 00:04:23.454 09:53:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.454 09:53:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.454 09:53:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:23.454 09:53:12 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:23.454 09:53:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:23.454 No valid GPT data, bailing 00:04:23.714 09:53:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:23.714 09:53:12 -- scripts/common.sh@391 -- # pt= 00:04:23.714 09:53:12 -- scripts/common.sh@392 -- # return 1 00:04:23.714 09:53:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:23.714 1+0 records in 00:04:23.714 1+0 records out 00:04:23.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424325 s, 247 MB/s 00:04:23.714 09:53:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.714 09:53:12 -- 
spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.714 09:53:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:23.714 09:53:12 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:23.714 09:53:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:23.714 No valid GPT data, bailing 00:04:23.714 09:53:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:23.714 09:53:13 -- scripts/common.sh@391 -- # pt= 00:04:23.714 09:53:13 -- scripts/common.sh@392 -- # return 1 00:04:23.714 09:53:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:23.714 1+0 records in 00:04:23.714 1+0 records out 00:04:23.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0038898 s, 270 MB/s 00:04:23.714 09:53:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.714 09:53:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.714 09:53:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:23.714 09:53:13 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:23.714 09:53:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:23.714 No valid GPT data, bailing 00:04:23.714 09:53:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:23.714 09:53:13 -- scripts/common.sh@391 -- # pt= 00:04:23.714 09:53:13 -- scripts/common.sh@392 -- # return 1 00:04:23.714 09:53:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:23.714 1+0 records in 00:04:23.714 1+0 records out 00:04:23.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452634 s, 232 MB/s 00:04:23.714 09:53:13 -- spdk/autotest.sh@118 -- # sync 00:04:23.714 09:53:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.714 09:53:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.714 09:53:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.617 09:53:14 -- spdk/autotest.sh@124 -- # uname -s 00:04:25.617 09:53:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:25.617 09:53:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:25.617 09:53:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:25.617 09:53:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:25.617 09:53:14 -- common/autotest_common.sh@10 -- # set +x 00:04:25.617 ************************************ 00:04:25.617 START TEST setup.sh 00:04:25.617 ************************************ 00:04:25.617 09:53:14 setup.sh -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:25.617 * Looking for test storage... 
00:04:25.617 09:53:14 -- spdk/autotest.sh@124 -- # uname -s
00:04:25.617 09:53:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:25.617 09:53:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:25.617 09:53:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:25.617 09:53:14 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:25.617 09:53:14 -- common/autotest_common.sh@10 -- # set +x
00:04:25.617 ************************************
00:04:25.617 START TEST setup.sh
00:04:25.617 ************************************
00:04:25.617 09:53:14 setup.sh -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:25.617 * Looking for test storage...
00:04:25.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:25.617 09:53:15 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:25.617 09:53:15 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:25.617 09:53:15 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:25.617 09:53:15 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:25.617 09:53:15 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:25.617 09:53:15 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:25.617 ************************************
00:04:25.617 START TEST acl
00:04:25.617 ************************************
00:04:25.617 09:53:15 setup.sh.acl -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:25.876 * Looking for test storage...
00:04:25.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:25.876 09:53:15 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=()
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme2n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n2
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme2n2
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n3
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme2n3
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3c3n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme3c3n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme3n1
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:04:25.876 09:53:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:04:25.876 09:53:15 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:04:25.876 09:53:15 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:04:25.876 09:53:15 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:04:25.876 09:53:15 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:04:25.876 09:53:15 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:04:25.876 09:53:15 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:25.876 09:53:15 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:26.812 09:53:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:04:26.812 09:53:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:04:26.812 09:53:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:26.812 09:53:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:04:26.812 09:53:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.812 09:53:16 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:27.378 09:53:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]]
00:04:27.378 09:53:16 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:27.378 09:53:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:27.944 Hugepages
00:04:27.944 node hugesize free / total
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:27.944
00:04:27.944 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]]
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:27.944 09:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:28.202 09:53:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]]
00:04:28.202 09:53:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:28.202 09:53:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]]
00:04:28.202 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:28.202 09:53:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:28.202 09:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:28.203 09:53:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 ))
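Between the reset and the denied/allowed tests, acl.sh builds its device list by reading the table that scripts/setup.sh status prints (header: Type BDF Vendor Device NUMA Driver ...): rows whose second field is not a BDF are skipped, non-nvme drivers such as virtio-pci are skipped, and anything listed in PCI_BLOCKED is skipped, which is how the trace arrives at (( 4 > 0 )) collected controllers. A hedged re-creation of that loop follows; $spdk_dir and the standalone form are assumptions (the real loop is collect_setup_devs in test/setup/acl.sh):

    #!/usr/bin/env bash
    # Sketch of the collect_setup_devs pattern; $spdk_dir stands in for the
    # repo root (/home/vagrant/spdk_repo/spdk in this run).
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        if [[ $dev != *:*:*.* ]]; then continue; fi           # skip header/hugepage rows
        if [[ $driver != nvme ]]; then continue; fi           # e.g. virtio-pci is ignored
        if [[ $PCI_BLOCKED == *"$dev"* ]]; then continue; fi  # honor the block list
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <("$spdk_dir/scripts/setup.sh" status)
    echo "collected ${#devs[@]} controller(s): ${devs[*]}"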
00:04:28.203 09:53:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:04:28.203 09:53:17 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:28.203 09:53:17 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:28.203 09:53:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:28.203 ************************************
00:04:28.203 START TEST denied
00:04:28.203 ************************************
00:04:28.203 09:53:17 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied
00:04:28.203 09:53:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0'
00:04:28.203 09:53:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:04:28.203 09:53:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0'
00:04:28.203 09:53:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.203 09:53:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:29.577 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]]
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:29.577 09:53:18 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:36.138
00:04:36.138 real 0m7.078s
00:04:36.138 user 0m0.822s
00:04:36.138 sys 0m1.300s
00:04:36.138 09:53:24 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:36.138 ************************************
00:04:36.138 END TEST denied
00:04:36.138 ************************************
00:04:36.138 09:53:24 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:36.138 09:53:24 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:36.138 09:53:24 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:36.138 09:53:24 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:36.138 09:53:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:36.138 ************************************
00:04:36.138 START TEST allowed
00:04:36.138 ************************************
00:04:36.138 09:53:24 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed
00:04:36.138 09:53:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0
00:04:36.138 09:53:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:36.138 09:53:24 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*'
00:04:36.138 09:53:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.138 09:53:24 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:36.398 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]]
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]]
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]]
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:36.398 09:53:25 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:37.775
00:04:37.775 real 0m2.210s
00:04:37.775 user 0m1.017s
00:04:37.775 sys 0m1.189s
00:04:37.775 09:53:26 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:37.775 09:53:26 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:37.775 ************************************
00:04:37.775 END TEST allowed
00:04:37.775 ************************************ ************************************
00:04:37.775 END TEST acl
00:04:37.775 ************************************
00:04:37.775
00:04:37.775 real 0m11.852s
00:04:37.775 user 0m3.041s
00:04:37.775 sys 0m3.858s
00:04:37.775 09:53:26 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:37.775 09:53:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
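The denied test above expects setup.sh config to print "Skipping denied controller" for the BDF in PCI_BLOCKED, while the allowed test expects the PCI_ALLOWED device to be rebound (nvme -> uio_pci_generic) and the remaining controllers to stay on the kernel nvme driver. The verify steps in the trace boil down to a readlink on each device's driver symlink; a small sketch of that check (the function name is invented):

    # Succeeds only if every given BDF exists and is still bound to the
    # kernel nvme driver, mirroring the verify trace above.
    verify_nvme_binding() {
        local bdf driver
        for bdf in "$@"; do
            [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver") || return 1
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }
    # e.g. verify_nvme_binding 0000:00:11.0 0000:00:12.0 0000:00:13.0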
00:04:37.776 09:53:26 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:37.776 09:53:26 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:37.776 09:53:26 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:37.776 09:53:26 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:37.776 ************************************
00:04:37.776 START TEST hugepages
00:04:37.776 ************************************
00:04:37.776 09:53:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:37.776 * Looking for test storage...
00:04:37.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5815592 kB' 'MemAvailable: 7397860 kB' 'Buffers: 2436 kB' 'Cached: 1795496 kB' 'SwapCached: 0 kB' 'Active: 444184 kB' 'Inactive: 1455420 kB' 'Active(anon): 112184 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455420 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 103404 kB' 'Mapped: 48788 kB' 'Shmem: 10512 kB' 'KReclaimable: 63576 kB' 'Slab: 136380 kB' 'SReclaimable: 63576 kB' 'SUnreclaim: 72804 kB' 'KernelStack: 6496 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 335712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:37.776 09:53:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[identical compare/continue/read xtrace repeats for each remaining /proc/meminfo key until Hugepagesize matches]
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
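get_meminfo, traced above, reads /proc/meminfo (or /sys/devices/system/node/node$N/meminfo when a node is given, stripping the "Node N" prefix) into an array and walks it key by key until the requested field matches; here Hugepagesize matches and 2048 is echoed. The same lookup can be condensed into one pipeline. This is an illustrative equivalent, not the helper from test/setup/common.sh:

    # Print the value column for one meminfo key, optionally per NUMA node.
    get_meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        # per-node files prefix every line with "Node N "; strip it first
        sed 's/^Node [0-9]* *//' "$file" |
            awk -v k="$key:" '$1 == k { print $2; exit }'
    }
    # get_meminfo_value Hugepagesize   -> 2048 (kB) on this machine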
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:37.777 09:53:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:37.777 09:53:27 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:37.777 09:53:27 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:37.777 09:53:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:37.778 ************************************
00:04:37.778 START TEST default_setup
00:04:37.778 ************************************
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:37.778 09:53:27 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:38.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:38.913 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:38.913 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:38.913 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:38.913 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
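default_setup then sizes the test pool: get_test_nr_hugepages 2097152 0 divides the 2097152 kB (2 GiB) request by the 2048 kB default hugepage size to get nr_hugepages=1024 on node 0, and the setup.sh run above reserves them (the next meminfo dump shows HugePages_Total: 1024). A sketch of that arithmetic and the per-node sysfs write; the function name is invented, the size argument is assumed to be in kB (consistent with 2097152 / 2048 = 1024), and the write needs root:

    # Reserve 2048 kB hugepages for a size given in kB; each listed node
    # receives the full count, matching nodes_test[0]=1024 in the trace.
    request_hugepages() {
        local size_kb=$1; shift
        local node nr=$(( size_kb / 2048 ))
        for node in "$@"; do
            echo "$nr" > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
        done
    }
    # request_hugepages 2097152 0   # mirrors get_test_nr_hugepages 2097152 0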
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7943000 kB' 'MemAvailable: 9524988 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 461752 kB' 'Inactive: 1455428 kB' 'Active(anon): 129752 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 120708 kB' 'Mapped: 48948 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 135400 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72400 kB' 'KernelStack: 6432 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.913 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[identical compare/continue/read xtrace repeats for each remaining /proc/meminfo key until AnonHugePages matches]
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942500 kB' 'MemAvailable: 9524488 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 461136 kB' 'Inactive: 1455428 kB' 'Active(anon): 129136 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 120244 kB' 'Mapped: 48788 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 135384 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72384 kB' 'KernelStack: 6384 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.914 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.915 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942500 kB' 'MemAvailable: 9524488 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 461264 kB' 'Inactive: 1455428 kB' 'Active(anon): 129264 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 120368 kB' 'Mapped: 48788 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 135384 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72384 kB' 'KernelStack: 6400 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.178 09:53:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.178 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.179 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 
09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
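The lines below close out the HugePages_Rsvd scan and print this run's summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) before setup/hugepages.sh@107 verifies the pool. A sketch of that bookkeeping under assumptions about the surrounding function: anon, surp, resv, and nr_hugepages follow the trace, while total is assumed to hold the HugePages_Total value (1024 in the dumps above) that appears expanded on the left of the (( ... )) checks:

    nr_hugepages=1024                     # requested default pool size
    total=$(get_meminfo HugePages_Total)  # 1024 in the meminfo dumps above
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # the default setup passes when the kernel's total covers the requested
    # count plus surplus and reserved pages (hugepages.sh@107 and @109)
    (( total == nr_hugepages + surp + resv ))
    (( total == nr_hugepages ))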
00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:39.180 nr_hugepages=1024 00:04:39.180 resv_hugepages=0 00:04:39.180 surplus_hugepages=0 00:04:39.180 anon_hugepages=0 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942248 kB' 'MemAvailable: 9524236 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 461148 kB' 'Inactive: 1455428 kB' 'Active(anon): 129148 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 120248 kB' 'Mapped: 48788 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 135384 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72384 kB' 'KernelStack: 6384 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.180 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same common.sh@31/@32 read/compare/continue cycle repeats for Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted, until the requested key matches ...]
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:39.182 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942248 kB' 'MemUsed: 4299724 kB' 'SwapCached: 0 kB' 'Active: 460932 kB' 'Inactive: 1455428 kB' 'Active(anon): 128932 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455428 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1797916 kB' 'Mapped: 48788 kB' 'AnonPages: 120292 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63000 kB' 'Slab: 135384 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
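The node0 record just printed is internally consistent; in particular MemUsed is simply MemTotal minus MemFree. A quick sanity check of that arithmetic with the values from the record above:

    echo $(( 12241972 - 7942248 ))   # MemTotal - MemFree, in kB -> 4299724

which matches the 'MemUsed: 4299724 kB' field the scan below walks past.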
00:04:39.183 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:39.183 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.183 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the @31/@32 cycle walks the rest of the node0 record (MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) until HugePages_Surp matches ...]
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
node0=1024 expecting 1024
************************************
00:04:39.184 END TEST default_setup
************************************
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:39.184
00:04:39.184 real 0m1.419s
00:04:39.184 user 0m0.625s
00:04:39.184 sys 0m0.744s
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:39.184 09:53:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:39.184 09:53:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:39.184 09:53:28 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:39.184 09:53:28 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:39.184 09:53:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:39.184 ************************************
00:04:39.184 START TEST per_node_1G_alloc
************************************
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.184 09:53:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
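The get_test_nr_hugepages sequence just above turned a request for 1048576 kB (1 GiB) on node 0 into nr_hugepages=512: with this VM's default 2048 kB huge pages (see the 'Hugepagesize: 2048 kB' field in the meminfo dumps below), the page count is simply the requested size divided by the page size. Checking that arithmetic:

    size_kb=1048576        # requested size in kB (1 GiB)
    hugepagesize_kb=2048   # default huge page size on this VM, per meminfo
    echo $(( size_kb / hugepagesize_kb ))   # -> 512

With HUGENODE=0 the whole allotment is pinned to node 0, which is what the @71 line (nodes_test[_no_nodes]=512) records before setup.sh is invoked.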
00:04:39.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:39.705 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:39.705 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:39.705 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:39.705 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8993152 kB' 'MemAvailable: 10575152 kB' 'Buffers: 2436 kB' 'Cached: 1795476 kB' 'SwapCached: 0 kB' 'Active: 461364 kB' 'Inactive: 1455440 kB' 'Active(anon): 129364 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455440 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120428 kB' 'Mapped: 48888 kB' 'Shmem: 10472 kB' 'KReclaimable: 63000 kB' 'Slab: 135420 kB' 'SReclaimable: 63000 kB' 'SUnreclaim: 72420 kB' 'KernelStack: 6324 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
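The hugepages.sh@96 test a few lines up ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) checks whether transparent huge pages are active before sampling AnonHugePages; 'always [madvise] never' is the conventional contents of /sys/kernel/mm/transparent_hugepage/enabled, with the active mode in brackets. A sketch of that logic, assuming that is indeed the file being read (the trace only shows its contents, not its path):

    # Reconstruction of the @96/@97 logic; uses the get_meminfo sketch above.
    thp_f=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    if [[ -e $thp_f && $(<"$thp_f") != *"[never]"* ]]; then
        # THP not disabled (here: "always [madvise] never"), so anonymous
        # huge pages could skew the hugetlb accounting -- sample them.
        anon=$(get_meminfo AnonHugePages)
    fi

On this VM the scan below comes back with AnonHugePages at 0 kB, so anon ends up 0.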
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.705 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the @31/@32 cycle walks MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted before the key matches ...]
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
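With anon settled, the last input to the verification is HugePages_Surp: surplus pages the kernel allocated beyond nr_hugepages through overcommit. Together with the reserved count, these make up the bookkeeping identity the test asserts (total == nr_hugepages + surp + resv, as at hugepages.sh@110 earlier). The raw counters can be eyeballed directly:

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # per the dump below: Total 512, Free 512, Rsvd 0, Surp 0

so the surplus lookup that follows is expected to return 0.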
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.706 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.707 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8993424 kB' 'MemAvailable: 10575412 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 461280 kB' 'Inactive: 1455444 kB' 'Active(anon): 129280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120404 kB' 'Mapped: 48788 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135404 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72436 kB' 'KernelStack: 6400 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[the compare-and-continue trace then repeats for every field from MemTotal onward until the requested HugePages_Surp line is reached]
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
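The local node= and the [[ -e /sys/devices/system/node/node/meminfo ]] check in the call above show that get_meminfo can read a per-NUMA-node meminfo file instead of the global /proc/meminfo, stripping the "Node <N>" prefix those files carry; with node empty it falls back to /proc/meminfo. A short sketch of that source selection under the same assumptions, with everything beyond the paths visible in the trace being illustrative:

#!/usr/bin/env bash
# Sketch: pick the per-node meminfo when a node is given, else the global file,
# then drop the "Node <N> " prefix so both sources parse the same way.
shopt -s extglob
node=${1:-}                              # e.g. "0"; empty selects the whole system
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")         # per-node lines read "Node 0 MemTotal: ..."
printf '%s\n' "${mem[@]}"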
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.708 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.709 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8993464 kB' 'MemAvailable: 10575452 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 461248 kB' 'Inactive: 1455444 kB' 'Active(anon): 129248 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120404 kB' 'Mapped: 48788 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135404 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72436 kB' 'KernelStack: 6400 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[the compare-and-continue trace repeats for every field from MemTotal onward until the requested HugePages_Rsvd line is reached]
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.972 nr_hugepages=512
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.972 resv_hugepages=0
00:04:39.972 surplus_hugepages=0 anon_hugepages=0
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
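The hugepages.sh@107 and @109 lines above check that the numbers just collected add up: 512 equals nr_hugepages plus the surplus and reserved counts (both 0 here), and equals nr_hugepages alone. A sketch of that kind of consistency check against /proc/meminfo; the expected variable and the awk helper are illustrative assumptions, not the hugepages.sh source:

#!/usr/bin/env bash
# Sketch: read the hugepage counters and verify the pool adds up, mirroring the
# (( 512 == nr_hugepages + surp + resv )) check in the trace above.
expected=${1:-512}

meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

if (( total == expected + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "hugepage pool mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    exit 1
fi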
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.972 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.973 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8993464 kB' 'MemAvailable: 10575452 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 461332 kB' 'Inactive: 1455444 kB' 'Active(anon): 129332 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 48788 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135404 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72436 kB' 'KernelStack: 6416 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[the compare-and-continue trace then runs field by field against HugePages_Total; MemTotal through WritebackTmp are skipped]
00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=':
' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.974 09:53:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8992708 kB' 'MemUsed: 3249264 kB' 'SwapCached: 0 kB' 'Active: 461280 kB' 'Inactive: 1455444 kB' 'Active(anon): 129280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1797916 kB' 'Mapped: 48788 kB' 'AnonPages: 120408 kB' 'Shmem: 10472 kB' 'KernelStack: 6400 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62968 kB' 'Slab: 135392 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
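For readers following the trace, the long runs of "continue" above come from a key lookup over a meminfo file. Below is a minimal, self-contained sketch of that lookup, reconstructed only from what the xtrace itself shows for setup/common.sh's get_meminfo (file selection, "Node N " prefix strip, IFS=': ' split, skip-until-match); the exact SPDK helper may differ in detail, and the function name here is just a stand-in.

# Sketch of the lookup the xtrace above is driving; illustration only.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line mem_f mem
    mem_f=/proc/meminfo
    # Per-node stats live in /sys/devices/system/node/nodeN/meminfo and carry
    # a "Node N " prefix on every line; with no node given, the -e test fails
    # and the system-wide /proc/meminfo is read instead (as in the trace).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every key until the requested one (the repeated "continue"
        # entries in the trace), then print its value and stop.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total    -> 512 in the run above
#      get_meminfo_sketch HugePages_Surp 0   -> 0 for node0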
00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.975 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.976 
09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.976 node0=512 expecting 512 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:39.976 ************************************ 00:04:39.976 END TEST per_node_1G_alloc 00:04:39.976 ************************************ 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:39.976 00:04:39.976 real 0m0.723s 00:04:39.976 user 0m0.326s 00:04:39.976 sys 0m0.402s 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:39.976 09:53:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.976 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:39.976 09:53:29 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:39.976 09:53:29 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:39.976 09:53:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.976 ************************************ 00:04:39.976 START TEST even_2G_alloc 00:04:39.976 ************************************ 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:39.976 
09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.976 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.499 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.499 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.499 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.499 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.499 09:53:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944172 kB' 'MemAvailable: 9526164 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 462012 kB' 'Inactive: 1455448 kB' 'Active(anon): 130012 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121092 kB' 'Mapped: 49180 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135380 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72412 kB' 'KernelStack: 6436 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.499 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
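The surrounding trace (hugepages.sh@110 onwards, and the even_2G_alloc setup with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes) is verifying that the requested hugepages ended up where expected: a system-wide total check, then a per-NUMA-node comparison against an even split. The sketch below shows the shape of that check, assuming the get_meminfo_sketch helper from the earlier example; the names and the simplified even-split accounting are stand-ins, not the exact SPDK functions, which also fold in reserved and surplus pages per node.

# Sketch of the hugepage verification pattern seen in the trace; illustration only.
shopt -s extglob
verify_even_alloc_sketch() {
    local nr_hugepages=$1        # e.g. 1024 for the 2 GiB even_2G_alloc case
    local node total surp resv no_nodes
    local -a nodes_sys nodes_test

    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # System-wide total must match the request plus surplus/reserved pages.
    (( total == nr_hugepages + surp + resv )) || return 1

    # Per-node view: what the kernel actually placed on each NUMA node ...
    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}
        nodes_sys[node]=$(get_meminfo_sketch HugePages_Total "$node")
    done
    # ... versus an even split of the request across the nodes found.
    no_nodes=${#nodes_sys[@]}
    for node in "${!nodes_sys[@]}"; do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done
    for node in "${!nodes_sys[@]}"; do
        # Mirrors the "node0=512 expecting 512" style output in the log.
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        (( nodes_sys[node] == nodes_test[node] )) || return 1
    done
}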
00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944424 kB' 'MemAvailable: 9526416 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461496 kB' 'Inactive: 1455448 kB' 'Active(anon): 129496 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 28 kB' 'AnonPages: 120560 kB' 'Mapped: 49088 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135376 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72408 kB' 'KernelStack: 6368 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.500 09:53:29 
00:04:40.500 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.501 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / compare / continue cycle at setup/common.sh@31-32 repeats for each remaining /proc/meminfo field ...]
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944424 kB' 'MemAvailable: 9526416 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461324 kB' 'Inactive: 1455448 kB' 'Active(anon): 129324 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120420 kB' 'Mapped: 48792 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135384 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72416 kB' 'KernelStack: 6384 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
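Each get_meminfo call traced in this stage has the same shape: pick a meminfo source, snapshot it, then scan field by field until the requested key matches. A minimal sketch of the helper, reconstructed from the xtrace alone; the function name and setup/common.sh line numbers are from the log, but the body below is inferred rather than quoted from the SPDK source, and the per-node branch is a guess based on the /sys/devices/system/node path probed above:

  shopt -s extglob                            # for the +([0-9]) pattern below
  get_meminfo() {                             # usage: get_meminfo <field> [node]
  	local get=$1 node=${2:-}
  	local var val
  	local mem_f mem
  	mem_f=/proc/meminfo
  	# The real helper probes a per-node file first (trace lines @23/@25);
  	# the exact control flow is inferred here.
  	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
  		mem_f=/sys/devices/system/node/node$node/meminfo
  	fi
  	mapfile -t mem < "$mem_f"
  	# Per-node meminfo prefixes every line with "Node N "; strip it.
  	mem=("${mem[@]#Node +([0-9]) }")
  	while IFS=': ' read -r var val _; do
  		[[ $var == "$get" ]] || continue    # the compare seen at @32
  		echo "$val"                         # the "echo 0" seen at @33
  		return 0
  	done < <(printf '%s\n' "${mem[@]}")     # the snapshot seen at @16
  	return 1
  }

The escaped right-hand side in the trace ([[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]) is simply how xtrace prints a quoted pattern: every character is escaped to show it matches literally rather than as a glob.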
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.502 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / compare / continue cycle at setup/common.sh@31-32 repeats for each remaining /proc/meminfo field until HugePages_Rsvd is reached ...]
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:40.504 nr_hugepages=1024
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:40.504 resv_hugepages=0
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:40.504 surplus_hugepages=0
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:40.504 anon_hugepages=0
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
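Condensed, the accounting that setup/hugepages.sh@97-110 just walked through looks like the following; the variable names and both arithmetic tests appear verbatim in the trace, while the surrounding layout is an inferred sketch, not a quote of the script:

  anon=$(get_meminfo AnonHugePages)     # -> 0 in this run
  surp=$(get_meminfo HugePages_Surp)    # -> 0
  resv=$(get_meminfo HugePages_Rsvd)    # -> 0
  nr_hugepages=1024                     # the pool size this test configured
  echo nr_hugepages=$nr_hugepages
  echo resv_hugepages=$resv
  echo surplus_hugepages=$surp
  echo anon_hugepages=$anon
  # All 1024 requested 2 MiB pages must be accounted for: none surplus and
  # none stuck in reservations, so the pool must equal the target exactly.
  (( 1024 == nr_hugepages + surp + resv ))
  (( 1024 == nr_hugepages ))

With surp and resv both 0 the identities hold, and the stage goes on to read HugePages_Total below to compare the kernel's own count against the same target.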
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.504 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.505 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944424 kB' 'MemAvailable: 9526416 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461320 kB' 'Inactive: 1455448 kB' 'Active(anon): 129320 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120420 kB' 'Mapped: 48792 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135376 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72408 kB' 'KernelStack: 6384 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:04:40.505 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.505 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / compare / continue cycle at setup/common.sh@31-32 repeats for each field from MemFree through FilePmdMapped ...]
00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:40.506
09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.506 09:53:29 
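The records above set up one call to setup/common.sh's get_meminfo: pick /proc/meminfo or the per-node sysfs copy (node0 here), slurp it with mapfile, strip the "Node 0 " prefix sysfs adds, then read field/value pairs until the requested key matches. A minimal sketch of that parser, reconstructed from the traced statements; the function scaffolding and the explicit loop are assumptions, only the individual statements appear in the trace:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern at common.sh@29 needs extglob

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the per-node copy under sysfs when NODE is given (common.sh@22-24).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node 0 " prefix
        for line in "${mem[@]}"; do
            # "HugePages_Total:    1024" -> var=HugePages_Total val=1024
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long compare/continue runs in this log
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # prints 1024 in this run
    get_meminfo HugePages_Surp 0    # the node0 variant traced at hugepages.sh@117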
00:04:40.506 09:53:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7944424 kB' 'MemUsed: 4297548 kB' 'SwapCached: 0 kB' 'Active: 461232 kB' 'Inactive: 1455448 kB' 'Active(anon): 129232 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1797920 kB' 'Mapped: 48792 kB' 'AnonPages: 120376 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62968 kB' 'Slab: 135372 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace repeats for every node0 field from MemTotal through HugePages_Free above -- each field is read, compared against HugePages_Surp and skipped with "continue" ...]
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:40.508 node0=1024 expecting 1024
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:40.508
00:04:40.508 real 0m0.647s
00:04:40.508 user 0m0.301s
00:04:40.508 sys 0m0.383s
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:40.508 09:53:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:40.508 ************************************
00:04:40.508 END TEST even_2G_alloc
00:04:40.508 ************************************
00:04:40.767 09:53:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:40.767 09:53:30 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:40.767 09:53:30 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:40.767 09:53:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
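even_2G_alloc's verdict comes down to the two checks traced at setup/hugepages.sh@110 and @115-130: the pool read back from /proc/meminfo must equal nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0 here), and each node's observed count must match what get_nodes recorded from sysfs, hence the "node0=1024 expecting 1024" line. A hedged sketch of that accounting with this run's values baked in, using get_meminfo as sketched earlier; the loop framing is an assumption:

    # global pool vs. request (hugepages.sh@110): 1024 == 1024 + 0 + 0
    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # per node: expected (nodes_test) vs. read back from sysfs (nodes_sys)
    nodes_test=(1024) nodes_sys=(1024)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                  # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") )) # @117
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done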
00:04:40.767 ************************************
00:04:40.767 START TEST odd_alloc
00:04:40.767 ************************************
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:40.767 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:41.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:41.291 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.291 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.291 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.291 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.291 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7939620 kB' 'MemAvailable: 9521612 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461472 kB' 'Inactive: 1455448 kB' 'Active(anon): 129472 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120536 kB' 'Mapped: 48940 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135372 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72404 kB' 'KernelStack: 6416 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
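The odd_alloc prologue above is the sizing step: HUGEMEM=2049 (megabytes) becomes size=2098176 kB at hugepages.sh@159, and nr_hugepages lands on 1025, an odd count of the default 2048 kB pages; the 'Hugetlb: 2099200 kB' field in the printf output is exactly 1025 * 2048 kB. A quick check of that arithmetic; the trace shows only the input and the result, so the round-up is an assumption:

    kb=$(( 2049 * 1024 ))                # HUGEMEM=2049 MB -> 2098176 kB
    pages=$(( (kb + 2048 - 1) / 2048 ))  # whole 2048 kB pages, rounded up (assumed)
    echo "$pages $(( pages * 2048 ))"    # -> "1025 2099200", matching the trace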
[... setup/common.sh@31-32 xtrace repeats for every field from MemTotal through HardwareCorrupted in the printf output above -- each field is read, compared against AnonHugePages and skipped with "continue" ...]
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.292 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7939872 kB' 'MemAvailable: 9521864 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461140 kB' 'Inactive: 1455448 kB' 'Active(anon): 129140 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120260 kB' 'Mapped: 48932 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135368 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72400 kB' 'KernelStack: 6400 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 xtrace repeats for MemTotal through FileHugePages -- each field is read, compared against HugePages_Surp and skipped with "continue" ...]
00:04:41.294
09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940136 kB' 'MemAvailable: 9522128 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461336 kB' 'Inactive: 1455448 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120196 kB' 'Mapped: 48784 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135360 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72392 kB' 'KernelStack: 6384 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
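[editor's note] For readability, here is a minimal sketch of the get_meminfo helper being traced above, reconstructed purely from the visible setup/common.sh lines (@17-@33): pick /proc/meminfo or a per-node meminfo file, strip the "Node N " prefix, then scan key/value pairs until the requested counter is found. The real SPDK helper may differ in details; names and structure below follow only what the log shows.

#!/usr/bin/env bash
# Minimal sketch of get_meminfo as reconstructed from this trace (setup/common.sh @17-@33).
# Not the verbatim SPDK helper; Linux-only.
shopt -s extglob

get_meminfo() {
    local get=$1        # key to look up, e.g. HugePages_Surp
    local node=${2:-}   # optional NUMA node index (empty -> system-wide /proc/meminfo)
    local var val _ line
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument, the per-node meminfo is used instead (seen for node=0 later in this log).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [kB]" lines until the requested key matches, then print its value.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# In this run both lookups print 0, which is why the trace settles on surp=0 and resv=0.
get_meminfo HugePages_Surp
get_meminfo HugePages_Rsvd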
00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.294 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.295 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.296 nr_hugepages=1025 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:41.296 resv_hugepages=0 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.296 surplus_hugepages=0 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.296 anon_hugepages=0 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940916 kB' 'MemAvailable: 9522908 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461512 kB' 'Inactive: 1455448 kB' 'Active(anon): 129512 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120460 kB' 'Mapped: 49044 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135360 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72392 kB' 'KernelStack: 6416 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.296 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 
09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.297 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940976 kB' 'MemUsed: 4300996 kB' 'SwapCached: 0 kB' 'Active: 461048 kB' 'Inactive: 1455448 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1797920 kB' 'Mapped: 48988 kB' 'AnonPages: 120168 kB' 'Shmem: 10472 kB' 'KernelStack: 6388 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62968 kB' 'Slab: 135356 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
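[editor's note] Stepping back from the raw trace, the odd_alloc bookkeeping being exercised here reduces to the sketch below, assembled from the setup/hugepages.sh line numbers visible in the log (@99 through @117). It assumes the get_meminfo sketch above, collapses the nodes_sys/nodes_test arrays seen in the trace into a single array, and is not the verbatim SPDK script.

#!/usr/bin/env bash
# Sketch of the odd_alloc accounting traced from setup/hugepages.sh (@99-@117).
# Assumes the get_meminfo sketch above; 1025 is the odd page count used in this run.
shopt -s extglob
nr_hugepages=1025

surp=$(get_meminfo HugePages_Surp)      # @99  -> 0 in this log
resv=$(get_meminfo HugePages_Rsvd)      # @100 -> 0 in this log

echo "nr_hugepages=$nr_hugepages"       # @102
echo "resv_hugepages=$resv"             # @103
echo "surplus_hugepages=$surp"          # @104
echo "anon_hugepages=0"                 # @105 (AnonHugePages is 0 kB in this run)

# @107/@110: the system-wide total must account for the requested odd allocation.
total=$(get_meminfo HugePages_Total)    # -> 1025 in this log
(( total == nr_hugepages + surp + resv ))
(( total == nr_hugepages ))

# get_nodes (@27-@33): record the expected count for every NUMA node present.
nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_test[${node##*node}]=$nr_hugepages
done
(( ${#nodes_test[@]} > 0 ))

# @115-@117: fold the reserved pages back in, then read each node's surplus count.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    get_meminfo HugePages_Surp "$node"   # the per-node read this excerpt ends in
done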
00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.298 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.299 node0=1025 expecting 1025 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:41.299 00:04:41.299 real 0m0.671s 00:04:41.299 user 0m0.348s 00:04:41.299 sys 0m0.369s 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.299 09:53:30 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:04:41.299 ************************************ 00:04:41.299 END TEST odd_alloc 00:04:41.299 ************************************ 00:04:41.299 09:53:30 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:41.299 09:53:30 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:41.299 09:53:30 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:41.299 09:53:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.299 ************************************ 00:04:41.299 START TEST custom_alloc 00:04:41.299 ************************************ 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.299 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.300 09:53:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.937 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.937 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.937 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.937 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.937 09:53:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8994936 kB' 'MemAvailable: 10576928 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461432 kB' 'Inactive: 1455448 kB' 'Active(anon): 129432 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 120532 kB' 'Mapped: 48984 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135348 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72380 kB' 'KernelStack: 6340 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
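
For context on the custom_alloc test being set up above: get_test_nr_hugepages is called with 1048576 kB and, with the 2048 kB Hugepagesize reported in the meminfo dump, arrives at 512 pages, which the test pins to node 0 via HUGENODE='nodes_hp[0]=512'. A rough sketch of that arithmetic, with the values taken from the trace:

    # Sizing arithmetic behind custom_alloc: a 1 GiB pool of default-size
    # hugepages works out to 512 pages.
    size_kb=1048576            # argument to get_test_nr_hugepages (1 GiB in kB)
    hugepagesize_kb=2048       # 'Hugepagesize: 2048 kB' from the dump above
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=512
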
00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:41.937 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8995188 kB' 'MemAvailable: 10577180 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461416 kB' 'Inactive: 1455448 kB' 'Active(anon): 129416 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120516 kB' 'Mapped: 48976 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135352 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72384 kB' 'KernelStack: 6356 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.938 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.939 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
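
At this point verify_nr_hugepages is collecting HugePages_Surp (and, just after, HugePages_Rsvd at setup/hugepages.sh@100) so it can repeat the same accounting check seen earlier for odd_alloc at setup/hugepages.sh@110: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages. A hedged, standalone sketch of that check follows; the real script reads the values through get_meminfo, while awk is used here only to keep the sketch self-contained, and meminfo_val is a made-up helper name.

    # Accounting check sketched from the trace: total == requested + surplus + reserved.
    meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    nr_hugepages=512                        # what custom_alloc asked for
    total=$(meminfo_val HugePages_Total)    # 512 in the meminfo dump above
    surp=$(meminfo_val HugePages_Surp)      # 0 at setup/hugepages.sh@99
    resv=$(meminfo_val HugePages_Rsvd)      # queried at setup/hugepages.sh@100
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool matches: $total pages"
    fi
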
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.940 09:53:31 
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.940 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8995188 kB' 'MemAvailable: 10577180 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461284 kB' 'Inactive: 1455448 kB' 'Active(anon): 129284 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120364 kB' 'Mapped: 48792 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135364 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72396 kB' 'KernelStack: 6368 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:04:41.940-00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every snapshot key from MemTotal through HugePages_Free; each fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hits continue]
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=512
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
resv_hugepages=0
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
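[editor's note] For orientation: each of the long scans above is one call of setup/common.sh's get_meminfo, which splits every meminfo line on ': ', skips each key that is not the requested one, and echoes the value of the one that matches. A minimal sketch of that lookup, assuming only the /proc/meminfo layout printed in the snapshots here (get_meminfo_sketch is an illustrative name, not the verbatim SPDK helper, which uses mapfile and also handles per-node files as traced further down):

get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue  # every non-matching key is skipped,
                                                  # which is all the long trace shows
                echo "$val"                       # numeric value; a trailing "kB" unit
                return 0                          # lands in $_ and is dropped
        done < /proc/meminfo
        return 1
}
# e.g. get_meminfo_sketch HugePages_Rsvd   -> prints 0 on this host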
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.942 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8995756 kB' 'MemAvailable: 10577748 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461288 kB' 'Inactive: 1455448 kB' 'Active(anon): 129288 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120408 kB' 'Mapped: 48792 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135364 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72396 kB' 'KernelStack: 6400 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:04:41.942-00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every snapshot key from MemTotal through CmaFree; each fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hits continue]
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
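[editor's note] Stepping back from the trace: hugepages.sh is verifying that the 512 huge pages it requested are fully accounted for, i.e. HugePages_Total must equal nr_hugepages plus surplus plus reserved, first system-wide and then per NUMA node (only node0 exists on this VM). A runnable sketch of that identity, assuming the meminfo layout shown above (verify_hugepages_sketch and the node glob are illustrative, not the verbatim setup/hugepages.sh):

verify_hugepages_sketch() {
        local expected=$1 total surp resv node
        read -r total surp resv < <(awk '
                /^HugePages_Total:/ { t = $2 }
                /^HugePages_Surp:/  { s = $2 }
                /^HugePages_Rsvd:/  { r = $2 }
                END { print t, s, r }' /proc/meminfo)
        # The system-wide check traced above: 512 == nr_hugepages + surp + resv.
        (( total == expected + surp + resv )) || return 1
        # Each NUMA node must also carry no surplus pages.
        for node in /sys/devices/system/node/node[0-9]*/meminfo; do
                surp=$(awk '/HugePages_Surp:/ { print $NF }' "$node")
                (( surp == 0 )) || return 1
        done
}
# e.g. verify_hugepages_sketch 512 && echo "hugepage accounting consistent"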
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.944 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8995804 kB' 'MemUsed: 3246168 kB' 'SwapCached: 0 kB' 'Active: 461300 kB' 'Inactive: 1455448 kB' 'Active(anon): 129300 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1797920 kB' 'Mapped: 48792 kB' 'AnonPages: 120396 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62968 kB' 'Slab: 135364 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:41.945-00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks the node0 snapshot keys from MemTotal onward; each fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hits continue; the captured log is truncated here mid-iteration]
setup/common.sh@32 -- # continue 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.205 node0=512 expecting 512 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:42.205 00:04:42.205 real 0m0.685s 00:04:42.205 user 0m0.352s 00:04:42.205 sys 0m0.372s 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:42.205 09:53:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.205 ************************************ 00:04:42.205 END TEST custom_alloc 00:04:42.205 ************************************ 00:04:42.205 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:42.205 09:53:31 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:42.205 09:53:31 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:42.205 09:53:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.205 ************************************ 00:04:42.205 START TEST no_shrink_alloc 00:04:42.205 ************************************ 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@51 -- # shift 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.205 09:53:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.728 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.728 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.728 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.728 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.728 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945276 kB' 'MemAvailable: 9527268 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461684 kB' 'Inactive: 1455448 kB' 'Active(anon): 129684 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120824 kB' 'Mapped: 48972 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135324 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72356 kB' 'KernelStack: 6440 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.729 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945276 kB' 'MemAvailable: 9527268 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 461092 kB' 'Inactive: 1455448 kB' 'Active(anon): 129092 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 
120460 kB' 'Mapped: 48792 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135352 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72384 kB' 'KernelStack: 6400 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945276 kB' 'MemAvailable: 9527268 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 459616 kB' 'Inactive: 1455448 kB' 'Active(anon): 127616 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118876 kB' 'Mapped: 48412 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135336 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72368 kB' 'KernelStack: 6432 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347680 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.732 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.733 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.734 nr_hugepages=1024 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.734 resv_hugepages=0 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.734 surplus_hugepages=0 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.734 anon_hugepages=0 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945276 kB' 'MemAvailable: 9527264 kB' 'Buffers: 2436 kB' 'Cached: 1795480 kB' 'SwapCached: 0 kB' 'Active: 459400 kB' 'Inactive: 1455444 kB' 'Active(anon): 127400 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118628 kB' 'Mapped: 48152 kB' 'Shmem: 10472 kB' 'KReclaimable: 62968 kB' 'Slab: 135328 kB' 'SReclaimable: 62968 kB' 'SUnreclaim: 72360 kB' 'KernelStack: 6384 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.734 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.735 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.736 
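The trace above is the setup/common.sh get_meminfo helper scanning /proc/meminfo field by field with IFS=': ', skipping every key until it reaches the one requested (HugePages_Surp, HugePages_Rsvd, then HugePages_Total), and echoing that key's value: surp=0, resv=0, and nr_hugepages=1024, so the consistency check (( 1024 == nr_hugepages + surp + resv )) holds as 1024 == 1024 + 0 + 0. The following is a minimal, hedged sketch of that parsing pattern, not the verbatim SPDK helper; the function name and locals are illustrative only.

    # Sketch of the get_meminfo-style lookup exercised by the trace above.
    # Reads /proc/meminfo, or a per-node meminfo file when a node is given,
    # and prints the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val rest
        while read -r line; do
            line=${line#"Node $node "}          # per-node lines are prefixed "Node N "
            IFS=': ' read -r var val rest <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                     # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
                return 0
            fi
        done <"$mem_f"
        return 1
    }

For example, get_meminfo_sketch HugePages_Surp would print 0 on the machine in this run, and get_meminfo_sketch HugePages_Free 0 would read node0's meminfo, matching the per-node "echo 1024" seen later in the trace.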
09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945276 kB' 'MemUsed: 4296696 kB' 'SwapCached: 0 kB' 'Active: 459012 kB' 'Inactive: 1455448 kB' 'Active(anon): 127012 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1797920 kB' 'Mapped: 48052 kB' 'AnonPages: 118172 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62952 kB' 'Slab: 135264 kB' 'SReclaimable: 62952 kB' 'SUnreclaim: 72312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.736 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
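The setup/common.sh@31-32 entries above are the test's get_meminfo helper walking a meminfo snapshot one "key: value" pair at a time and skipping every key until it reaches the one it was asked for (HugePages_Surp in this pass, which it reports as 0 just below). A minimal standalone sketch of that lookup, assuming plain /proc/meminfo and a simplified function name of its own (not the script's), is:

# Hypothetical, simplified version of the per-key scan seen in the trace.
get_meminfo_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # skip non-matching keys, as the trace shows
    echo "$val"                        # numeric value; the "kB" unit lands in the discarded field
    return 0
  done < /proc/meminfo
}

get_meminfo_field HugePages_Surp       # prints 0 on this host, matching the trace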
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.737 node0=1024 expecting 1024
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.737 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:43.308 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.308 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.308 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.308 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.308 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.308 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940344 kB' 'MemAvailable: 9522328 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 460180 kB' 'Inactive: 1455448 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119352 kB' 'Mapped: 48228 kB' 'Shmem: 10472 kB' 'KReclaimable: 62952 kB' 'Slab: 135248 kB' 'SReclaimable: 62952 kB' 'SUnreclaim: 72296 kB' 'KernelStack: 6404 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
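The full meminfo snapshot printed a few entries back already contains the numbers this test is comparing: HugePages_Total: 1024 and Hugepagesize: 2048 kB give a pinned pool of 1024 * 2048 kB = 2097152 kB (2 GiB), which is exactly the Hugetlb: 2097152 kB field in the same dump, out of roughly 12 GB of MemTotal. One way to confirm that relationship on a live system (an illustrative sketch, not part of the test scripts):

# Multiply the hugepage count by the page size and compare with the Hugetlb field.
awk '/^HugePages_Total:/ {n = $2}
     /^Hugepagesize:/    {sz = $2}
     /^Hugetlb:/         {tot = $2}
     END {printf "pool = %d kB, reported Hugetlb = %d kB\n", n * sz, tot}' /proc/meminfo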
00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.309 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940344 kB' 'MemAvailable: 9522328 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 459500 kB' 'Inactive: 1455448 kB' 'Active(anon): 127500 kB' 
'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118588 kB' 'Mapped: 48100 kB' 'Shmem: 10472 kB' 'KReclaimable: 62952 kB' 'Slab: 135244 kB' 'SReclaimable: 62952 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6276 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.310 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.311 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
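The setup/common.sh@18-29 entries at the start of each of these get_meminfo calls also show how the helper picks its data source: with no node argument the existence test collapses to /sys/devices/system/node/node/meminfo, so it falls back to /proc/meminfo; when a node is given, it reads that node's meminfo file and strips the "Node <N> " prefix that per-node files put on every line. A hedged sketch of that source selection (simplified, with sed standing in for the script's own prefix stripping):

# Read hugepage counters either globally or for one NUMA node.
node=$1                                # may be empty, as in this run
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
  mem_f=/sys/devices/system/node/node$node/meminfo
fi
# Per-node lines look like "Node 0 HugePages_Total: 1024"; drop the prefix.
sed 's/^Node [0-9]* //' "$mem_f" | grep '^HugePages'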
00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940344 kB' 'MemAvailable: 9522328 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 459120 kB' 'Inactive: 1455448 kB' 'Active(anon): 127120 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118468 kB' 'Mapped: 48100 kB' 'Shmem: 10472 kB' 'KReclaimable: 62952 kB' 'Slab: 135244 kB' 'SReclaimable: 62952 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6348 kB' 
'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.312 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: the IFS=': ' / read -r var val _ / [[ $var == HugePages_Rsvd ]] / continue cycle repeats identically for every remaining /proc/meminfo field from Inactive through HugePages_Total; none of them matches] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
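For readers following the trace: setup/common.sh's get_meminfo helper is what produces the field-by-field scan above. A minimal sketch of that logic, reconstructed from the @17-@33 xtrace markers (an approximation, not SPDK's verbatim source; the traced version first mapfiles the whole snapshot and strips the 'Node N ' prefixes seen at common.sh@29, while this sketch reads the file directly):

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs when a node index is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested one -- exactly the
            # [[ $var == HugePages_Rsvd ]] / continue pattern in the log.
            [[ $var == "$get" ]] || continue
            echo "$val"   # IFS=': ' already split the 'kB' unit into $_
            return 0
        done < "$mem_f"
        return 1
    }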
00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.314 nr_hugepages=1024 00:04:43.314 resv_hugepages=0 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.314 surplus_hugepages=0 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.314 anon_hugepages=0 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940344 kB' 'MemAvailable: 9522328 kB' 'Buffers: 2436 kB' 'Cached: 1795484 kB' 'SwapCached: 0 kB' 'Active: 459356 kB' 'Inactive: 1455448 kB' 'Active(anon): 127356 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 
'AnonPages: 118472 kB' 'Mapped: 48100 kB' 'Shmem: 10472 kB' 'KReclaimable: 62952 kB' 'Slab: 135244 kB' 'SReclaimable: 62952 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6316 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.314 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.315 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.315 09:53:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:43.315 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: the same per-field scan repeats for Inactive through Unaccepted while looking for HugePages_Total] 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
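The echo 1024 just below closes out this get_meminfo HugePages_Total call, and hugepages.sh@107-@110 then compares it against the requested pool. A hedged sketch of that accounting check, pieced together from the @100-@110 markers (names follow the trace; surp was obtained the same way earlier in the test, outside this excerpt):

    verify_no_shrink() {
        local nr_hugepages=$1 resv surp
        resv=$(get_meminfo HugePages_Rsvd)   # 0 in the trace above
        surp=$(get_meminfo HugePages_Surp)   # also 0 here
        echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" "surplus_hugepages=$surp"
        # The pool must not have shrunk: the kernel's HugePages_Total has to
        # equal the requested count plus surplus plus reserved pages.
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    }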
00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.576 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7940092 kB' 'MemUsed: 4301880 kB' 'SwapCached: 0 kB' 'Active: 459312 kB' 'Inactive: 1455452 kB' 'Active(anon): 127312 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1455452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1797924 kB' 'Mapped: 48052 kB' 'AnonPages: 118452 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62952 kB' 'Slab: 135244 kB' 'SReclaimable: 62952 kB' 'SUnreclaim: 72292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.577 
09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace elided: the per-field scan repeats over the node0 meminfo snapshot, MemFree through Unaccepted, while looking for HugePages_Surp] 00:04:43.577 09:53:32
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.577 node0=1024 expecting 1024 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.577 00:04:43.577 real 0m1.366s 00:04:43.577 user 0m0.652s 00:04:43.577 sys 0m0.799s 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.577 09:53:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 ************************************ 00:04:43.577 END TEST no_shrink_alloc 00:04:43.577 ************************************ 00:04:43.577 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:43.577 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:43.577 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:43.577 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.578 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.578 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.578 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.578 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:43.578 09:53:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:43.578 ************************************ 00:04:43.578 END TEST hugepages 00:04:43.578 ************************************ 00:04:43.578 00:04:43.578 real 0m5.950s 00:04:43.578 user 0m2.751s 00:04:43.578 sys 0m3.332s 00:04:43.578 09:53:32 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.578 
09:53:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.578 09:53:32 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:43.578 09:53:32 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.578 09:53:32 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.578 09:53:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.578 ************************************ 00:04:43.578 START TEST driver 00:04:43.578 ************************************ 00:04:43.578 09:53:32 setup.sh.driver -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:43.578 * Looking for test storage... 00:04:43.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.578 09:53:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:43.578 09:53:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.578 09:53:33 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.142 09:53:38 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:50.142 09:53:38 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:50.142 09:53:38 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:50.142 09:53:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.142 ************************************ 00:04:50.142 START TEST guess_driver 00:04:50.142 ************************************ 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:50.142 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz 
== *\.\k\o* ]] 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:50.142 Looking for driver=uio_pci_generic 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.142 09:53:38 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.142 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:50.142 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:50.142 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.708 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.708 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.708 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.708 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.708 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.708 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.708 09:53:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.270 00:04:57.270 real 0m7.032s 00:04:57.270 user 0m0.762s 00:04:57.270 sys 0m1.326s 00:04:57.270 09:53:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:57.270 09:53:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:57.270 ************************************ 00:04:57.270 END TEST guess_driver 00:04:57.270 ************************************ 00:04:57.270 00:04:57.270 real 0m13.062s 00:04:57.270 user 0m1.091s 00:04:57.270 sys 0m2.132s 00:04:57.270 09:53:46 
setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:57.270 09:53:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:57.270 ************************************ 00:04:57.270 END TEST driver 00:04:57.270 ************************************ 00:04:57.270 09:53:46 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:57.270 09:53:46 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:57.270 09:53:46 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:57.270 09:53:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:57.270 ************************************ 00:04:57.270 START TEST devices 00:04:57.270 ************************************ 00:04:57.270 09:53:46 setup.sh.devices -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:57.270 * Looking for test storage... 00:04:57.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.270 09:53:46 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:57.270 09:53:46 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:57.270 09:53:46 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.270 09:53:46 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.866 09:53:47 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:57.866 09:53:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme2n1 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n2 
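The END TEST driver block above is the tail of the guess_driver run: pick_driver first tries vfio, finds zero IOMMU groups and no unsafe no-IOMMU override, and falls back to uio_pci_generic, which modprobe can resolve to a .ko. A minimal Bash sketch of that selection logic, reconstructed from the xtrace line references above (it is not the verbatim test/setup/driver.sh):

    #!/usr/bin/env bash
    shopt -s nullglob  # so an empty /sys/kernel/iommu_groups yields an empty array

    pick_driver() {
        # vfio-pci is usable when IOMMU groups exist, or when unsafe
        # no-IOMMU mode has been turned on explicitly.
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # Otherwise fall back to uio_pci_generic, accepting it only if
        # modprobe can resolve the dependency chain to real .ko modules.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

In this run the first branch fails ((( 0 > 0 )) and [[ '' == Y ]] in the trace), so the test settles on uio_pci_generic.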
00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme2n2 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n3 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme2n3 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3c3n1 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme3c3n1 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3n1 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme3n1 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:57.867 09:53:47 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:57.867 09:53:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:57.867 09:53:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:57.867 No valid GPT data, bailing 00:04:57.867 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.867 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:57.867 09:53:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:57.867 09:53:47 setup.sh.devices -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:04:57.867 09:53:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:57.867 09:53:47 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:57.867 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:57.867 09:53:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:57.867 09:53:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:58.155 No valid GPT data, bailing 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:58.155 No valid GPT data, bailing 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 
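The device scan running through these entries applies a simple screen: skip zoned block devices, then keep only devices of at least min_disk_size (3221225472 bytes), which is why the 1 GiB nvme3n1 is excluded further down while the larger namespaces are kept. A hedged sketch of the two checks, paraphrased from the setup/common.sh and setup/devices.sh references in the trace (the 512-byte-sector assumption in sec_size_to_bytes is mine):

    min_disk_size=3221225472  # 3 GiB, as in setup/devices.sh@198

    # A namespace is "zoned" when /sys/block/<dev>/queue/zoned reports
    # anything other than "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(<"/sys/block/$device/queue/zoned") != none ]]
    }

    # Convert the sector count sysfs reports into bytes.
    sec_size_to_bytes() {
        local dev=$1
        [[ -e /sys/block/$dev ]] || return 1
        echo $(( $(<"/sys/block/$dev/size") * 512 ))
    }

    for block in /sys/block/nvme*; do   # the real loop also excludes *c* controller nodes
        dev=${block##*/}
        is_block_zoned "$dev" && continue
        (( $(sec_size_to_bytes "$dev") >= min_disk_size )) || continue
        echo "candidate: $dev"
    done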
00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:58.155 No valid GPT data, bailing 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:58.155 No valid GPT data, bailing 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.155 09:53:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.155 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:58.155 09:53:47 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:58.155 09:53:47 
setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:58.156 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:58.156 09:53:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:58.156 09:53:47 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:58.414 No valid GPT data, bailing 00:04:58.414 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:58.414 09:53:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.414 09:53:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.414 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:58.414 09:53:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:58.414 09:53:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:58.414 09:53:47 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:04:58.414 09:53:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:58.414 09:53:47 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:58.414 09:53:47 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:58.414 09:53:47 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:58.414 09:53:47 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:58.414 09:53:47 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:58.414 09:53:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.414 ************************************ 00:04:58.414 START TEST nvme_mount 00:04:58.414 ************************************ 00:04:58.414 09:53:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:58.414 09:53:47 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:58.414 09:53:47 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:58.414 09:53:47 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.414 09:53:47 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local 
part part_start=0 part_end=0 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.415 09:53:47 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:59.352 Creating new GPT entries in memory. 00:04:59.352 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.352 other utilities. 00:04:59.352 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.352 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.352 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.352 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.352 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:00.289 Creating new GPT entries in memory. 00:05:00.289 The operation has completed successfully. 
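At this point partition_drive has zapped the disk and created the first partition; in sector terms the trace works out as below. A compact sketch of the arithmetic and the sgdisk calls, matching the numbers shown (sync_dev_uevents.sh is a wrapper that waits for the partition "add" uevents so /dev/nvme0n1p1 exists before the next step touches it):

    disk=nvme0n1
    size=1073741824                 # per-partition byte budget from the trace
    (( size /= 4096 ))              # 262144, the sector length used below
    sgdisk "/dev/$disk" --zap-all   # destroy any existing GPT/MBR structures
    part_start=2048                 # first usable LBA
    part_end=$(( part_start + size - 1 ))   # 2048 + 262144 - 1 = 264191
    flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:$part_start:$part_end

The flock serializes sgdisk against concurrent readers of the partition table; the matching mkfs.ext4 -qF and mount follow a few entries below.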
00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59450 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:00.289 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:00.548 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.549 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.549 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:00.549 09:53:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.549 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.549 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.549 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.549 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:00.549 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.549 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.549 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.549 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.808 09:53:50 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.808 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.808 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.808 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.808 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.808 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.067 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.067 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.326 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.326 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.585 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.585 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.585 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.585 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.585 09:53:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.844 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.844 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:01.844 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.844 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.844 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.844 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.103 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.103 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.103 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.103 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.103 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.103 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.362 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.362 09:53:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.619 09:53:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.877 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.877 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:02.877 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.877 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.877 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.877 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.135 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.135 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.135 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.135 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.135 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.135 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.394 09:53:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.394 09:53:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.653 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.653 00:05:03.653 real 0m5.356s 00:05:03.653 user 0m1.517s 00:05:03.653 sys 0m1.543s 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:03.653 09:53:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:03.653 ************************************ 00:05:03.653 END TEST nvme_mount 00:05:03.653 ************************************ 00:05:03.653 09:53:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.653 09:53:53 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:03.653 09:53:53 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:03.653 09:53:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:03.653 ************************************ 00:05:03.653 START TEST dm_mount 00:05:03.653 ************************************ 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.653 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.654 09:53:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:05.029 Creating new GPT entries in memory. 00:05:05.029 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:05.029 other utilities. 00:05:05.029 09:53:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:05.029 09:53:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.029 09:53:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.029 09:53:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.029 09:53:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:05.966 Creating new GPT entries in memory. 00:05:05.966 The operation has completed successfully. 00:05:05.966 09:53:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:05.966 09:53:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.966 09:53:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.966 09:53:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.966 09:53:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:06.902 The operation has completed successfully. 
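The dm_mount test has now carved two equal partitions (sectors 2048-264191 and 264192-526335); the next entries create a device-mapper node named nvme_dm_test on top of them. The exact table lives in setup/devices.sh; the sketch below is an illustrative reconstruction that concatenates the two partitions into one linear device, using the 262144-sector lengths from the sgdisk calls above:

    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    len=262144                       # sectors per partition, from the trace
    dmsetup create nvme_dm_test <<EOF
    0 $len linear $p1 0
    $len $len linear $p2 0
    EOF
    # The node appears as /dev/mapper/nvme_dm_test, resolving to /dev/dm-0,
    # which matches the readlink -f output in the entries that follow.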
00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60075 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.902 09:53:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.160 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.417 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.417 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.417 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.417 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.675 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.675 09:53:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.934 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.202 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.202 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.202 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.202 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.202 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.202 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.460 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.460 09:53:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:08.718 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:08.718 00:05:08.718 real 0m5.074s 00:05:08.718 user 0m0.926s 00:05:08.718 sys 0m1.079s 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.718 09:53:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:08.718 ************************************ 00:05:08.718 END TEST dm_mount 00:05:08.718 ************************************ 00:05:08.718 09:53:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:08.718 09:53:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.718 09:53:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.718 09:53:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.718 09:53:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.976 09:53:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.976 09:53:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.235 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.235 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.235 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.235 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.235 09:53:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:09.235 09:53:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.235 09:53:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:09.235 09:53:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.235 09:53:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:09.235 09:53:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.235 09:53:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:09.235 ************************************ 00:05:09.235 END TEST devices 00:05:09.235 ************************************ 00:05:09.235 00:05:09.235 real 0m12.433s 00:05:09.235 user 0m3.388s 00:05:09.235 sys 0m3.380s 00:05:09.235 09:53:58 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:09.235 09:53:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.235 00:05:09.235 real 0m43.576s 00:05:09.235 user 0m10.380s 00:05:09.235 sys 0m12.858s 00:05:09.235 09:53:58 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:09.235 09:53:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.235 ************************************ 00:05:09.235 END TEST setup.sh 00:05:09.235 ************************************ 00:05:09.235 09:53:58 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:09.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.059 Hugepages 00:05:10.059 node hugesize free / total 00:05:10.059 node0 1048576kB 0 / 0 00:05:10.059 node0 2048kB 2048 / 2048 00:05:10.059 
00:05:10.059 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.059 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:10.317 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:10.317 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:10.317 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:10.317 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:10.574 09:53:59 -- spdk/autotest.sh@130 -- # uname -s 00:05:10.574 09:53:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:10.574 09:53:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:10.574 09:53:59 -- common/autotest_common.sh@1530 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.399 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.399 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.399 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.657 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.657 09:54:01 -- common/autotest_common.sh@1531 -- # sleep 1 00:05:12.591 09:54:02 -- common/autotest_common.sh@1532 -- # bdfs=() 00:05:12.591 09:54:02 -- common/autotest_common.sh@1532 -- # local bdfs 00:05:12.591 09:54:02 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.591 09:54:02 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:05:12.591 09:54:02 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:12.591 09:54:02 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:12.591 09:54:02 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.591 09:54:02 -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.591 09:54:02 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:12.591 09:54:02 -- common/autotest_common.sh@1514 -- # (( 4 == 0 )) 00:05:12.591 09:54:02 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:12.591 09:54:02 -- common/autotest_common.sh@1535 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.156 Waiting for block devices as requested 00:05:13.156 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:13.414 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:13.414 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:13.414 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.682 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:18.682 09:54:07 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:18.682 09:54:07 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:18.682 09:54:07 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:18.682 09:54:07 -- common/autotest_common.sh@1501 -- # grep 0000:00:10.0/nvme/nvme 00:05:18.682 09:54:07 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:18.682 09:54:07 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:18.682 09:54:07 -- common/autotest_common.sh@1506 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:18.682 09:54:07 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme1 00:05:18.682 09:54:07 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme1 00:05:18.682 09:54:07 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme1 ]] 00:05:18.682 09:54:07 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme1 00:05:18.682 09:54:07 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:18.682 09:54:07 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:18.682 09:54:07 -- common/autotest_common.sh@1544 -- # oacs=' 0x12a' 00:05:18.682 09:54:07 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:18.682 09:54:07 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:18.682 09:54:07 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme1 00:05:18.682 09:54:07 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:18.682 09:54:07 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:18.682 09:54:07 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:18.682 09:54:07 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:18.682 09:54:07 -- common/autotest_common.sh@1556 -- # continue 00:05:18.682 09:54:07 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:18.682 09:54:07 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:18.682 09:54:07 -- common/autotest_common.sh@1501 -- # grep 0000:00:11.0/nvme/nvme 00:05:18.683 09:54:07 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # oacs=' 0x12a' 00:05:18.683 09:54:08 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:18.683 09:54:08 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:18.683 09:54:08 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1556 -- # continue 00:05:18.683 09:54:08 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:18.683 09:54:08 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1501 -- # grep 0000:00:12.0/nvme/nvme 00:05:18.683 09:54:08 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
/sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme2 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # oacs=' 0x12a' 00:05:18.683 09:54:08 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:18.683 09:54:08 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:18.683 09:54:08 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1556 -- # continue 00:05:18.683 09:54:08 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:18.683 09:54:08 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:18.683 09:54:08 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1501 -- # grep 0000:00:13.0/nvme/nvme 00:05:18.683 09:54:08 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme3 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1544 -- # oacs=' 0x12a' 00:05:18.683 09:54:08 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:18.683 09:54:08 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme3 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:18.683 09:54:08 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:18.683 09:54:08 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:18.683 09:54:08 -- common/autotest_common.sh@1556 -- # continue 00:05:18.683 09:54:08 -- spdk/autotest.sh@135 -- # 
timing_exit pre_cleanup 00:05:18.683 09:54:08 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:18.683 09:54:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.683 09:54:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:18.683 09:54:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:18.683 09:54:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.683 09:54:08 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.817 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.817 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.817 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.817 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.075 09:54:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:20.075 09:54:09 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:20.075 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.075 09:54:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:20.075 09:54:09 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:20.075 09:54:09 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:20.075 09:54:09 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:20.075 09:54:09 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:20.075 09:54:09 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:20.075 09:54:09 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:20.075 09:54:09 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:20.075 09:54:09 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.075 09:54:09 -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.075 09:54:09 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:20.075 09:54:09 -- common/autotest_common.sh@1514 -- # (( 4 == 0 )) 00:05:20.075 09:54:09 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:20.075 09:54:09 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # device=0x0010 00:05:20.075 09:54:09 -- common/autotest_common.sh@1580 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.075 09:54:09 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # device=0x0010 00:05:20.075 09:54:09 -- common/autotest_common.sh@1580 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.075 09:54:09 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # device=0x0010 00:05:20.075 09:54:09 -- common/autotest_common.sh@1580 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.075 09:54:09 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:20.075 09:54:09 -- common/autotest_common.sh@1579 -- # device=0x0010 
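The namespace-revert pass traced above repeats the same recipe for every controller: resolve the PCI address to its /dev node through sysfs, read the OACS word with nvme-cli, and only proceed when bit 3 (Namespace Management) is set and some unallocated capacity remains. A condensed sketch of that recipe, reusing the helper name visible in the trace (get_nvme_bdfs); this illustrates the pattern and is not the exact autotest_common.sh code:

    for bdf in $(get_nvme_bdfs); do
        # e.g. 0000:00:10.0 -> /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 -> /dev/nvme1
        ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0x12a' in this run
        (( oacs & 0x8 )) || continue    # OACS bit 3: Namespace Management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue  # no unallocated capacity, nothing to revert
        # a revert of the namespaces would happen here
    done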
00:05:20.075 09:54:09 -- common/autotest_common.sh@1580 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.075 09:54:09 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:05:20.075 09:54:09 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:05:20.075 09:54:09 -- common/autotest_common.sh@1592 -- # return 0 00:05:20.075 09:54:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:20.075 09:54:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:20.075 09:54:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:20.075 09:54:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:20.075 09:54:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:20.075 09:54:09 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:20.075 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.075 09:54:09 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:20.075 09:54:09 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.075 09:54:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.075 09:54:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.075 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.075 ************************************ 00:05:20.075 START TEST env 00:05:20.075 ************************************ 00:05:20.075 09:54:09 env -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.075 * Looking for test storage... 00:05:20.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:20.075 09:54:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.075 09:54:09 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.075 09:54:09 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.075 09:54:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.075 ************************************ 00:05:20.075 START TEST env_memory 00:05:20.075 ************************************ 00:05:20.075 09:54:09 env.env_memory -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.333 00:05:20.333 00:05:20.333 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.333 http://cunit.sourceforge.net/ 00:05:20.333 00:05:20.333 00:05:20.333 Suite: memory 00:05:20.333 Test: alloc and free memory map ...[2024-06-10 09:54:09.667273] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.333 passed 00:05:20.333 Test: mem map translation ...[2024-06-10 09:54:09.748596] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.333 [2024-06-10 09:54:09.748708] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.333 [2024-06-10 09:54:09.748852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.333 [2024-06-10 09:54:09.748895] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.333 passed 00:05:20.333 Test: mem map registration ...[2024-06-10 09:54:09.847831] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid 
spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:20.333 [2024-06-10 09:54:09.847913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:20.591 passed 00:05:20.591 Test: mem map adjacent registrations ...passed 00:05:20.591 00:05:20.591 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.591 suites 1 1 n/a 0 0 00:05:20.591 tests 4 4 4 0 0 00:05:20.591 asserts 152 152 152 0 n/a 00:05:20.591 00:05:20.591 Elapsed time = 0.377 seconds 00:05:20.591 00:05:20.591 real 0m0.420s 00:05:20.591 user 0m0.382s 00:05:20.591 sys 0m0.033s 00:05:20.591 09:54:09 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.591 09:54:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:20.591 ************************************ 00:05:20.591 END TEST env_memory 00:05:20.591 ************************************ 00:05:20.591 09:54:10 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:20.592 09:54:10 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.592 09:54:10 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.592 09:54:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.592 ************************************ 00:05:20.592 START TEST env_vtophys 00:05:20.592 ************************************ 00:05:20.592 09:54:10 env.env_vtophys -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:20.592 EAL: lib.eal log level changed from notice to debug 00:05:20.592 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 1 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 2 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 3 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 4 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 5 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 6 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 7 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 8 as core 0 on socket 0 00:05:20.592 EAL: Detected lcore 9 as core 0 on socket 0 00:05:20.592 EAL: Maximum logical cores by configuration: 128 00:05:20.592 EAL: Detected CPU lcores: 10 00:05:20.592 EAL: Detected NUMA nodes: 1 00:05:20.592 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:20.592 EAL: Detected shared linkage of DPDK 00:05:20.850 EAL: No shared files mode enabled, IPC will be disabled 00:05:20.850 EAL: Selected IOVA mode 'PA' 00:05:20.850 EAL: Probing VFIO support... 00:05:20.851 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:20.851 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:20.851 EAL: Ask a virtual area of 0x2e000 bytes 00:05:20.851 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:20.851 EAL: Setting up physically contiguous memory... 
00:05:20.851 EAL: Setting maximum number of open files to 524288 00:05:20.851 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:20.851 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:20.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.851 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:20.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.851 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:20.851 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:20.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.851 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:20.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.851 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:20.851 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:20.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.851 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:20.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.851 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:20.851 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:20.851 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.851 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:20.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.851 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.851 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:20.851 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:20.851 EAL: Hugepages will be freed exactly as allocated. 00:05:20.851 EAL: No shared files mode enabled, IPC is disabled 00:05:20.851 EAL: No shared files mode enabled, IPC is disabled 00:05:20.851 EAL: TSC frequency is ~2200000 KHz 00:05:20.851 EAL: Main lcore 0 is ready (tid=7f64f9a0ea40;cpuset=[0]) 00:05:20.851 EAL: Trying to obtain current memory policy. 00:05:20.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.851 EAL: Restoring previous memory policy: 0 00:05:20.851 EAL: request: mp_malloc_sync 00:05:20.851 EAL: No shared files mode enabled, IPC is disabled 00:05:20.851 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.851 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:20.851 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.851 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.851 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:20.851 00:05:20.851 00:05:20.851 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.851 http://cunit.sourceforge.net/ 00:05:20.851 00:05:20.851 00:05:20.851 Suite: components_suite 00:05:21.419 Test: vtophys_malloc_test ...passed 00:05:21.419 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
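The EAL bookkeeping above is internally consistent and worth a quick arithmetic check: each of the four memseg lists holds n_segs 8192 slots of 2 MiB hugepages, which is exactly the 0x400000000-byte (16 GiB) virtual areas being reserved, so roughly 64 GiB of address space is mapped out up front even though the earlier "node0 2048kB 2048 / 2048" status shows only 4 GiB of real hugepages:

    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000 per list, matching "size = 0x400000000"
    echo "$(( 4 * 8192 * 2 )) MB of VA reserved"    # 65536 MB across the 4 lists
    echo "$(( 2048 * 2 )) MB of 2 MiB hugepages"    # 4096 MB actually available (node0 status)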
00:05:21.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.419 EAL: Restoring previous memory policy: 4 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.419 EAL: Heap on socket 0 was expanded by 4MB 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.419 EAL: Heap on socket 0 was shrunk by 4MB 00:05:21.419 EAL: Trying to obtain current memory policy. 00:05:21.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.419 EAL: Restoring previous memory policy: 4 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.419 EAL: Heap on socket 0 was expanded by 6MB 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.419 EAL: Heap on socket 0 was shrunk by 6MB 00:05:21.419 EAL: Trying to obtain current memory policy. 00:05:21.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.419 EAL: Restoring previous memory policy: 4 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.419 EAL: Heap on socket 0 was expanded by 10MB 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.419 EAL: Heap on socket 0 was shrunk by 10MB 00:05:21.419 EAL: Trying to obtain current memory policy. 00:05:21.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.419 EAL: Restoring previous memory policy: 4 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.419 EAL: Heap on socket 0 was expanded by 18MB 00:05:21.419 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.419 EAL: request: mp_malloc_sync 00:05:21.419 EAL: No shared files mode enabled, IPC is disabled 00:05:21.420 EAL: Heap on socket 0 was shrunk by 18MB 00:05:21.420 EAL: Trying to obtain current memory policy. 00:05:21.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.420 EAL: Restoring previous memory policy: 4 00:05:21.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.420 EAL: request: mp_malloc_sync 00:05:21.420 EAL: No shared files mode enabled, IPC is disabled 00:05:21.420 EAL: Heap on socket 0 was expanded by 34MB 00:05:21.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.420 EAL: request: mp_malloc_sync 00:05:21.420 EAL: No shared files mode enabled, IPC is disabled 00:05:21.420 EAL: Heap on socket 0 was shrunk by 34MB 00:05:21.420 EAL: Trying to obtain current memory policy. 
00:05:21.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.420 EAL: Restoring previous memory policy: 4 00:05:21.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.420 EAL: request: mp_malloc_sync 00:05:21.420 EAL: No shared files mode enabled, IPC is disabled 00:05:21.420 EAL: Heap on socket 0 was expanded by 66MB 00:05:21.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.687 EAL: request: mp_malloc_sync 00:05:21.687 EAL: No shared files mode enabled, IPC is disabled 00:05:21.687 EAL: Heap on socket 0 was shrunk by 66MB 00:05:21.687 EAL: Trying to obtain current memory policy. 00:05:21.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.687 EAL: Restoring previous memory policy: 4 00:05:21.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.687 EAL: request: mp_malloc_sync 00:05:21.687 EAL: No shared files mode enabled, IPC is disabled 00:05:21.687 EAL: Heap on socket 0 was expanded by 130MB 00:05:21.971 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.971 EAL: request: mp_malloc_sync 00:05:21.971 EAL: No shared files mode enabled, IPC is disabled 00:05:21.971 EAL: Heap on socket 0 was shrunk by 130MB 00:05:21.971 EAL: Trying to obtain current memory policy. 00:05:21.971 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.234 EAL: Restoring previous memory policy: 4 00:05:22.234 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.234 EAL: request: mp_malloc_sync 00:05:22.234 EAL: No shared files mode enabled, IPC is disabled 00:05:22.234 EAL: Heap on socket 0 was expanded by 258MB 00:05:22.493 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.493 EAL: request: mp_malloc_sync 00:05:22.493 EAL: No shared files mode enabled, IPC is disabled 00:05:22.493 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.060 EAL: Trying to obtain current memory policy. 00:05:23.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.060 EAL: Restoring previous memory policy: 4 00:05:23.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.060 EAL: request: mp_malloc_sync 00:05:23.060 EAL: No shared files mode enabled, IPC is disabled 00:05:23.060 EAL: Heap on socket 0 was expanded by 514MB 00:05:23.626 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.885 EAL: request: mp_malloc_sync 00:05:23.885 EAL: No shared files mode enabled, IPC is disabled 00:05:23.885 EAL: Heap on socket 0 was shrunk by 514MB 00:05:24.453 EAL: Trying to obtain current memory policy. 
00:05:24.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.712 EAL: Restoring previous memory policy: 4 00:05:24.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.712 EAL: request: mp_malloc_sync 00:05:24.712 EAL: No shared files mode enabled, IPC is disabled 00:05:24.712 EAL: Heap on socket 0 was expanded by 1026MB 00:05:26.086 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.344 EAL: request: mp_malloc_sync 00:05:26.344 EAL: No shared files mode enabled, IPC is disabled 00:05:26.344 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:27.719 passed 00:05:27.719 00:05:27.719 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.719 suites 1 1 n/a 0 0 00:05:27.719 tests 2 2 2 0 0 00:05:27.719 asserts 5285 5285 5285 0 n/a 00:05:27.719 00:05:27.719 Elapsed time = 6.838 seconds 00:05:27.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.719 EAL: request: mp_malloc_sync 00:05:27.719 EAL: No shared files mode enabled, IPC is disabled 00:05:27.719 EAL: Heap on socket 0 was shrunk by 2MB 00:05:27.719 EAL: No shared files mode enabled, IPC is disabled 00:05:27.719 EAL: No shared files mode enabled, IPC is disabled 00:05:27.719 EAL: No shared files mode enabled, IPC is disabled 00:05:27.719 00:05:27.719 real 0m7.151s 00:05:27.719 user 0m6.302s 00:05:27.719 sys 0m0.691s 00:05:27.719 09:54:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:27.719 09:54:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:27.719 ************************************ 00:05:27.719 END TEST env_vtophys 00:05:27.719 ************************************ 00:05:27.719 09:54:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:27.719 09:54:17 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:27.719 09:54:17 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:27.719 09:54:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.978 ************************************ 00:05:27.978 START TEST env_pci 00:05:27.978 ************************************ 00:05:27.978 09:54:17 env.env_pci -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:27.978 00:05:27.978 00:05:27.978 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.978 http://cunit.sourceforge.net/ 00:05:27.978 00:05:27.978 00:05:27.978 Suite: pci 00:05:27.978 Test: pci_hook ...[2024-06-10 09:54:17.284807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61898 has claimed it 00:05:27.978 passed 00:05:27.978 00:05:27.978 EAL: Cannot find device (10000:00:01.0) 00:05:27.978 EAL: Failed to attach device on primary process 00:05:27.978 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.978 suites 1 1 n/a 0 0 00:05:27.978 tests 1 1 1 0 0 00:05:27.978 asserts 25 25 25 0 n/a 00:05:27.978 00:05:27.978 Elapsed time = 0.008 seconds 00:05:27.978 00:05:27.978 real 0m0.085s 00:05:27.978 user 0m0.047s 00:05:27.978 sys 0m0.037s 00:05:27.978 09:54:17 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:27.978 09:54:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:27.978 ************************************ 00:05:27.978 END TEST env_pci 00:05:27.978 ************************************ 00:05:27.978 09:54:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.978 09:54:17 env -- env/env.sh@15 -- # uname 00:05:27.978 09:54:17 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.978 09:54:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.978 09:54:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.978 09:54:17 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:27.978 09:54:17 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:27.978 09:54:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.978 ************************************ 00:05:27.978 START TEST env_dpdk_post_init 00:05:27.978 ************************************ 00:05:27.978 09:54:17 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.978 EAL: Detected CPU lcores: 10 00:05:27.978 EAL: Detected NUMA nodes: 1 00:05:27.978 EAL: Detected shared linkage of DPDK 00:05:27.978 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.978 EAL: Selected IOVA mode 'PA' 00:05:28.237 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:28.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:28.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:28.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:28.237 Starting DPDK initialization... 00:05:28.237 Starting SPDK post initialization... 00:05:28.237 SPDK NVMe probe 00:05:28.237 Attaching to 0000:00:10.0 00:05:28.237 Attaching to 0000:00:11.0 00:05:28.237 Attaching to 0000:00:12.0 00:05:28.237 Attaching to 0000:00:13.0 00:05:28.237 Attached to 0000:00:13.0 00:05:28.237 Attached to 0000:00:10.0 00:05:28.237 Attached to 0000:00:11.0 00:05:28.237 Attached to 0000:00:12.0 00:05:28.237 Cleaning up... 
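The argument string for env_dpdk_post_init above comes straight from env.sh as traced a few lines earlier: a one-core mask plus --base-virtaddr, appended only on Linux so DPDK maps its memory at a fixed virtual base. A minimal sketch of that construction, using the values from this run (the full env.sh wraps it in run_test and may do more):

    argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        # fixed base address keeps DPDK mappings predictable across the env tests
        argv+=--base-virtaddr=0x200000000000
    fi
    run_test env_dpdk_post_init \
        /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv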
00:05:28.237 00:05:28.237 real 0m0.287s 00:05:28.237 user 0m0.100s 00:05:28.237 sys 0m0.090s 00:05:28.237 09:54:17 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.237 09:54:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.237 ************************************ 00:05:28.237 END TEST env_dpdk_post_init 00:05:28.237 ************************************ 00:05:28.237 09:54:17 env -- env/env.sh@26 -- # uname 00:05:28.237 09:54:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.237 09:54:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.237 09:54:17 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.237 09:54:17 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.237 09:54:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.237 ************************************ 00:05:28.237 START TEST env_mem_callbacks 00:05:28.237 ************************************ 00:05:28.237 09:54:17 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.496 EAL: Detected CPU lcores: 10 00:05:28.496 EAL: Detected NUMA nodes: 1 00:05:28.496 EAL: Detected shared linkage of DPDK 00:05:28.496 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.496 EAL: Selected IOVA mode 'PA' 00:05:28.496 00:05:28.496 00:05:28.496 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.496 http://cunit.sourceforge.net/ 00:05:28.496 00:05:28.496 00:05:28.496 Suite: memory 00:05:28.496 Test: test ... 00:05:28.496 register 0x200000200000 2097152 00:05:28.496 malloc 3145728 00:05:28.496 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.496 register 0x200000400000 4194304 00:05:28.496 buf 0x2000004fffc0 len 3145728 PASSED 00:05:28.496 malloc 64 00:05:28.496 buf 0x2000004ffec0 len 64 PASSED 00:05:28.496 malloc 4194304 00:05:28.496 register 0x200000800000 6291456 00:05:28.496 buf 0x2000009fffc0 len 4194304 PASSED 00:05:28.496 free 0x2000004fffc0 3145728 00:05:28.496 free 0x2000004ffec0 64 00:05:28.496 unregister 0x200000400000 4194304 PASSED 00:05:28.496 free 0x2000009fffc0 4194304 00:05:28.496 unregister 0x200000800000 6291456 PASSED 00:05:28.496 malloc 8388608 00:05:28.496 register 0x200000400000 10485760 00:05:28.496 buf 0x2000005fffc0 len 8388608 PASSED 00:05:28.496 free 0x2000005fffc0 8388608 00:05:28.496 unregister 0x200000400000 10485760 PASSED 00:05:28.496 passed 00:05:28.496 00:05:28.496 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.496 suites 1 1 n/a 0 0 00:05:28.496 tests 1 1 1 0 0 00:05:28.496 asserts 15 15 15 0 n/a 00:05:28.496 00:05:28.496 Elapsed time = 0.069 seconds 00:05:28.496 00:05:28.496 real 0m0.272s 00:05:28.496 user 0m0.104s 00:05:28.496 sys 0m0.066s 00:05:28.496 09:54:17 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.496 ************************************ 00:05:28.496 END TEST env_mem_callbacks 00:05:28.496 ************************************ 00:05:28.496 09:54:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:28.755 00:05:28.755 real 0m8.548s 00:05:28.755 user 0m7.048s 00:05:28.755 sys 0m1.119s 00:05:28.755 09:54:18 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.755 ************************************ 00:05:28.755 END TEST env 00:05:28.755 09:54:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.755 
************************************ 00:05:28.755 09:54:18 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:28.755 09:54:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.755 09:54:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.755 09:54:18 -- common/autotest_common.sh@10 -- # set +x 00:05:28.755 ************************************ 00:05:28.755 START TEST rpc 00:05:28.755 ************************************ 00:05:28.755 09:54:18 rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:28.755 * Looking for test storage... 00:05:28.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.755 09:54:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62017 00:05:28.755 09:54:18 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:28.755 09:54:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.755 09:54:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62017 00:05:28.755 09:54:18 rpc -- common/autotest_common.sh@830 -- # '[' -z 62017 ']' 00:05:28.755 09:54:18 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.755 09:54:18 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:28.755 09:54:18 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.755 09:54:18 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:28.755 09:54:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.014 [2024-06-10 09:54:18.282435] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:29.014 [2024-06-10 09:54:18.282596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62017 ] 00:05:29.014 [2024-06-10 09:54:18.456174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.273 [2024-06-10 09:54:18.685669] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.273 [2024-06-10 09:54:18.685740] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62017' to capture a snapshot of events at runtime. 00:05:29.273 [2024-06-10 09:54:18.685762] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.273 [2024-06-10 09:54:18.685781] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.273 [2024-06-10 09:54:18.685796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62017 for offline analysis/debug. 
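Before any RPCs run, rpc.sh above launches spdk_tgt in the background (pid 62017 in this run) and blocks in waitforlisten until the target is usable. A rough sketch of what that wait amounts to, assuming the defaults visible in the trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100); the real helper in autotest_common.sh additionally confirms the RPC server answers, which is elided here:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the target process died
            [[ -S $rpc_addr ]] && return 0           # socket is there; assume it is ready
            sleep 0.1
        done
        return 1
    }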
00:05:29.273 [2024-06-10 09:54:18.685853] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.210 09:54:19 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:30.210 09:54:19 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:30.210 09:54:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.210 09:54:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.210 09:54:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.210 09:54:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.210 09:54:19 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.210 09:54:19 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.210 09:54:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 ************************************ 00:05:30.210 START TEST rpc_integrity 00:05:30.210 ************************************ 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.210 { 00:05:30.210 "name": "Malloc0", 00:05:30.210 "aliases": [ 00:05:30.210 "17b84c60-8ad7-4fee-bbad-0b0145e056c3" 00:05:30.210 ], 00:05:30.210 "product_name": "Malloc disk", 00:05:30.210 "block_size": 512, 00:05:30.210 "num_blocks": 16384, 00:05:30.210 "uuid": "17b84c60-8ad7-4fee-bbad-0b0145e056c3", 00:05:30.210 "assigned_rate_limits": { 00:05:30.210 "rw_ios_per_sec": 0, 00:05:30.210 "rw_mbytes_per_sec": 0, 00:05:30.210 "r_mbytes_per_sec": 0, 00:05:30.210 "w_mbytes_per_sec": 0 00:05:30.210 }, 00:05:30.210 "claimed": false, 00:05:30.210 "zoned": false, 00:05:30.210 "supported_io_types": { 00:05:30.210 "read": true, 00:05:30.210 "write": true, 00:05:30.210 "unmap": true, 00:05:30.210 "write_zeroes": 
true, 00:05:30.210 "flush": true, 00:05:30.210 "reset": true, 00:05:30.210 "compare": false, 00:05:30.210 "compare_and_write": false, 00:05:30.210 "abort": true, 00:05:30.210 "nvme_admin": false, 00:05:30.210 "nvme_io": false 00:05:30.210 }, 00:05:30.210 "memory_domains": [ 00:05:30.210 { 00:05:30.210 "dma_device_id": "system", 00:05:30.210 "dma_device_type": 1 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.210 "dma_device_type": 2 00:05:30.210 } 00:05:30.210 ], 00:05:30.210 "driver_specific": {} 00:05:30.210 } 00:05:30.210 ]' 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 [2024-06-10 09:54:19.620672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.210 [2024-06-10 09:54:19.620762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.210 [2024-06-10 09:54:19.620803] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:30.210 [2024-06-10 09:54:19.620830] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.210 [2024-06-10 09:54:19.623474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.210 [2024-06-10 09:54:19.623524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.210 Passthru0 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.210 { 00:05:30.210 "name": "Malloc0", 00:05:30.210 "aliases": [ 00:05:30.210 "17b84c60-8ad7-4fee-bbad-0b0145e056c3" 00:05:30.210 ], 00:05:30.210 "product_name": "Malloc disk", 00:05:30.210 "block_size": 512, 00:05:30.210 "num_blocks": 16384, 00:05:30.210 "uuid": "17b84c60-8ad7-4fee-bbad-0b0145e056c3", 00:05:30.210 "assigned_rate_limits": { 00:05:30.210 "rw_ios_per_sec": 0, 00:05:30.210 "rw_mbytes_per_sec": 0, 00:05:30.210 "r_mbytes_per_sec": 0, 00:05:30.210 "w_mbytes_per_sec": 0 00:05:30.210 }, 00:05:30.210 "claimed": true, 00:05:30.210 "claim_type": "exclusive_write", 00:05:30.210 "zoned": false, 00:05:30.210 "supported_io_types": { 00:05:30.210 "read": true, 00:05:30.210 "write": true, 00:05:30.210 "unmap": true, 00:05:30.210 "write_zeroes": true, 00:05:30.210 "flush": true, 00:05:30.210 "reset": true, 00:05:30.210 "compare": false, 00:05:30.210 "compare_and_write": false, 00:05:30.210 "abort": true, 00:05:30.210 "nvme_admin": false, 00:05:30.210 "nvme_io": false 00:05:30.210 }, 00:05:30.210 "memory_domains": [ 00:05:30.210 { 00:05:30.210 "dma_device_id": "system", 00:05:30.210 "dma_device_type": 1 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.210 "dma_device_type": 2 00:05:30.210 } 
00:05:30.210 ], 00:05:30.210 "driver_specific": {} 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "name": "Passthru0", 00:05:30.210 "aliases": [ 00:05:30.210 "e389103a-0a59-5f5c-9514-434eb8e06a46" 00:05:30.210 ], 00:05:30.210 "product_name": "passthru", 00:05:30.210 "block_size": 512, 00:05:30.210 "num_blocks": 16384, 00:05:30.210 "uuid": "e389103a-0a59-5f5c-9514-434eb8e06a46", 00:05:30.210 "assigned_rate_limits": { 00:05:30.210 "rw_ios_per_sec": 0, 00:05:30.210 "rw_mbytes_per_sec": 0, 00:05:30.210 "r_mbytes_per_sec": 0, 00:05:30.210 "w_mbytes_per_sec": 0 00:05:30.210 }, 00:05:30.210 "claimed": false, 00:05:30.210 "zoned": false, 00:05:30.210 "supported_io_types": { 00:05:30.210 "read": true, 00:05:30.210 "write": true, 00:05:30.210 "unmap": true, 00:05:30.210 "write_zeroes": true, 00:05:30.210 "flush": true, 00:05:30.210 "reset": true, 00:05:30.210 "compare": false, 00:05:30.210 "compare_and_write": false, 00:05:30.210 "abort": true, 00:05:30.210 "nvme_admin": false, 00:05:30.210 "nvme_io": false 00:05:30.210 }, 00:05:30.210 "memory_domains": [ 00:05:30.210 { 00:05:30.210 "dma_device_id": "system", 00:05:30.210 "dma_device_type": 1 00:05:30.210 }, 00:05:30.210 { 00:05:30.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.210 "dma_device_type": 2 00:05:30.210 } 00:05:30.210 ], 00:05:30.210 "driver_specific": { 00:05:30.210 "passthru": { 00:05:30.210 "name": "Passthru0", 00:05:30.210 "base_bdev_name": "Malloc0" 00:05:30.210 } 00:05:30.210 } 00:05:30.210 } 00:05:30.210 ]' 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.210 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.210 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.469 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.469 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.469 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.469 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.469 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.469 09:54:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.469 00:05:30.469 real 0m0.339s 00:05:30.469 user 0m0.206s 00:05:30.469 sys 0m0.037s 00:05:30.469 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.469 09:54:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 ************************************ 00:05:30.469 END TEST rpc_integrity 00:05:30.469 ************************************ 00:05:30.469 09:54:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.469 09:54:19 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.469 09:54:19 rpc -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.469 09:54:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 ************************************ 00:05:30.469 START TEST rpc_plugins 00:05:30.469 ************************************ 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.469 { 00:05:30.469 "name": "Malloc1", 00:05:30.469 "aliases": [ 00:05:30.469 "78a81b06-ae88-4eb9-89b5-65fcb6e015ed" 00:05:30.469 ], 00:05:30.469 "product_name": "Malloc disk", 00:05:30.469 "block_size": 4096, 00:05:30.469 "num_blocks": 256, 00:05:30.469 "uuid": "78a81b06-ae88-4eb9-89b5-65fcb6e015ed", 00:05:30.469 "assigned_rate_limits": { 00:05:30.469 "rw_ios_per_sec": 0, 00:05:30.469 "rw_mbytes_per_sec": 0, 00:05:30.469 "r_mbytes_per_sec": 0, 00:05:30.469 "w_mbytes_per_sec": 0 00:05:30.469 }, 00:05:30.469 "claimed": false, 00:05:30.469 "zoned": false, 00:05:30.469 "supported_io_types": { 00:05:30.469 "read": true, 00:05:30.469 "write": true, 00:05:30.469 "unmap": true, 00:05:30.469 "write_zeroes": true, 00:05:30.469 "flush": true, 00:05:30.469 "reset": true, 00:05:30.469 "compare": false, 00:05:30.469 "compare_and_write": false, 00:05:30.469 "abort": true, 00:05:30.469 "nvme_admin": false, 00:05:30.469 "nvme_io": false 00:05:30.469 }, 00:05:30.469 "memory_domains": [ 00:05:30.469 { 00:05:30.469 "dma_device_id": "system", 00:05:30.469 "dma_device_type": 1 00:05:30.469 }, 00:05:30.469 { 00:05:30.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.469 "dma_device_type": 2 00:05:30.469 } 00:05:30.469 ], 00:05:30.469 "driver_specific": {} 00:05:30.469 } 00:05:30.469 ]' 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 09:54:19 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.469 09:54:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:30.728 09:54:20 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:05:30.728 00:05:30.728 real 0m0.149s 00:05:30.728 user 0m0.098s 00:05:30.728 sys 0m0.015s 00:05:30.728 09:54:20 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.728 09:54:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.728 ************************************ 00:05:30.728 END TEST rpc_plugins 00:05:30.728 ************************************ 00:05:30.728 09:54:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.728 09:54:20 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.728 09:54:20 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.728 09:54:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.728 ************************************ 00:05:30.728 START TEST rpc_trace_cmd_test 00:05:30.728 ************************************ 00:05:30.728 09:54:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:30.728 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:30.728 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.728 09:54:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.728 09:54:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.728 09:54:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.728 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:30.728 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62017", 00:05:30.728 "tpoint_group_mask": "0x8", 00:05:30.728 "iscsi_conn": { 00:05:30.728 "mask": "0x2", 00:05:30.728 "tpoint_mask": "0x0" 00:05:30.728 }, 00:05:30.728 "scsi": { 00:05:30.729 "mask": "0x4", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "bdev": { 00:05:30.729 "mask": "0x8", 00:05:30.729 "tpoint_mask": "0xffffffffffffffff" 00:05:30.729 }, 00:05:30.729 "nvmf_rdma": { 00:05:30.729 "mask": "0x10", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "nvmf_tcp": { 00:05:30.729 "mask": "0x20", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "ftl": { 00:05:30.729 "mask": "0x40", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "blobfs": { 00:05:30.729 "mask": "0x80", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "dsa": { 00:05:30.729 "mask": "0x200", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "thread": { 00:05:30.729 "mask": "0x400", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "nvme_pcie": { 00:05:30.729 "mask": "0x800", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "iaa": { 00:05:30.729 "mask": "0x1000", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "nvme_tcp": { 00:05:30.729 "mask": "0x2000", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "bdev_nvme": { 00:05:30.729 "mask": "0x4000", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 }, 00:05:30.729 "sock": { 00:05:30.729 "mask": "0x8000", 00:05:30.729 "tpoint_mask": "0x0" 00:05:30.729 } 00:05:30.729 }' 00:05:30.729 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.729 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:30.729 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.729 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.729 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:05:30.729 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.729 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.987 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.987 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.987 09:54:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.987 00:05:30.987 real 0m0.253s 00:05:30.987 user 0m0.220s 00:05:30.987 sys 0m0.025s 00:05:30.988 09:54:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.988 09:54:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.988 ************************************ 00:05:30.988 END TEST rpc_trace_cmd_test 00:05:30.988 ************************************ 00:05:30.988 09:54:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.988 09:54:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.988 09:54:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.988 09:54:20 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.988 09:54:20 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.988 09:54:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.988 ************************************ 00:05:30.988 START TEST rpc_daemon_integrity 00:05:30.988 ************************************ 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.988 { 00:05:30.988 "name": "Malloc2", 00:05:30.988 "aliases": [ 00:05:30.988 "417b17c1-4cdf-49e6-a593-8ab6bbed7194" 00:05:30.988 ], 00:05:30.988 "product_name": "Malloc disk", 00:05:30.988 "block_size": 512, 00:05:30.988 "num_blocks": 16384, 00:05:30.988 "uuid": "417b17c1-4cdf-49e6-a593-8ab6bbed7194", 00:05:30.988 "assigned_rate_limits": { 00:05:30.988 "rw_ios_per_sec": 0, 00:05:30.988 
"rw_mbytes_per_sec": 0, 00:05:30.988 "r_mbytes_per_sec": 0, 00:05:30.988 "w_mbytes_per_sec": 0 00:05:30.988 }, 00:05:30.988 "claimed": false, 00:05:30.988 "zoned": false, 00:05:30.988 "supported_io_types": { 00:05:30.988 "read": true, 00:05:30.988 "write": true, 00:05:30.988 "unmap": true, 00:05:30.988 "write_zeroes": true, 00:05:30.988 "flush": true, 00:05:30.988 "reset": true, 00:05:30.988 "compare": false, 00:05:30.988 "compare_and_write": false, 00:05:30.988 "abort": true, 00:05:30.988 "nvme_admin": false, 00:05:30.988 "nvme_io": false 00:05:30.988 }, 00:05:30.988 "memory_domains": [ 00:05:30.988 { 00:05:30.988 "dma_device_id": "system", 00:05:30.988 "dma_device_type": 1 00:05:30.988 }, 00:05:30.988 { 00:05:30.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.988 "dma_device_type": 2 00:05:30.988 } 00:05:30.988 ], 00:05:30.988 "driver_specific": {} 00:05:30.988 } 00:05:30.988 ]' 00:05:30.988 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 [2024-06-10 09:54:20.515425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.247 [2024-06-10 09:54:20.515503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.247 [2024-06-10 09:54:20.515537] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:31.247 [2024-06-10 09:54:20.515554] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.247 [2024-06-10 09:54:20.518281] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.247 [2024-06-10 09:54:20.518338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.247 Passthru0 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.247 { 00:05:31.247 "name": "Malloc2", 00:05:31.247 "aliases": [ 00:05:31.247 "417b17c1-4cdf-49e6-a593-8ab6bbed7194" 00:05:31.247 ], 00:05:31.247 "product_name": "Malloc disk", 00:05:31.247 "block_size": 512, 00:05:31.247 "num_blocks": 16384, 00:05:31.247 "uuid": "417b17c1-4cdf-49e6-a593-8ab6bbed7194", 00:05:31.247 "assigned_rate_limits": { 00:05:31.247 "rw_ios_per_sec": 0, 00:05:31.247 "rw_mbytes_per_sec": 0, 00:05:31.247 "r_mbytes_per_sec": 0, 00:05:31.247 "w_mbytes_per_sec": 0 00:05:31.247 }, 00:05:31.247 "claimed": true, 00:05:31.247 "claim_type": "exclusive_write", 00:05:31.247 "zoned": false, 00:05:31.247 "supported_io_types": { 00:05:31.247 "read": true, 00:05:31.247 "write": true, 00:05:31.247 "unmap": true, 00:05:31.247 "write_zeroes": true, 00:05:31.247 "flush": true, 00:05:31.247 "reset": true, 00:05:31.247 "compare": false, 
00:05:31.247 "compare_and_write": false, 00:05:31.247 "abort": true, 00:05:31.247 "nvme_admin": false, 00:05:31.247 "nvme_io": false 00:05:31.247 }, 00:05:31.247 "memory_domains": [ 00:05:31.247 { 00:05:31.247 "dma_device_id": "system", 00:05:31.247 "dma_device_type": 1 00:05:31.247 }, 00:05:31.247 { 00:05:31.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.247 "dma_device_type": 2 00:05:31.247 } 00:05:31.247 ], 00:05:31.247 "driver_specific": {} 00:05:31.247 }, 00:05:31.247 { 00:05:31.247 "name": "Passthru0", 00:05:31.247 "aliases": [ 00:05:31.247 "7b37124a-2c40-5455-b298-427b618e4eaa" 00:05:31.247 ], 00:05:31.247 "product_name": "passthru", 00:05:31.247 "block_size": 512, 00:05:31.247 "num_blocks": 16384, 00:05:31.247 "uuid": "7b37124a-2c40-5455-b298-427b618e4eaa", 00:05:31.247 "assigned_rate_limits": { 00:05:31.247 "rw_ios_per_sec": 0, 00:05:31.247 "rw_mbytes_per_sec": 0, 00:05:31.247 "r_mbytes_per_sec": 0, 00:05:31.247 "w_mbytes_per_sec": 0 00:05:31.247 }, 00:05:31.247 "claimed": false, 00:05:31.247 "zoned": false, 00:05:31.247 "supported_io_types": { 00:05:31.247 "read": true, 00:05:31.247 "write": true, 00:05:31.247 "unmap": true, 00:05:31.247 "write_zeroes": true, 00:05:31.247 "flush": true, 00:05:31.247 "reset": true, 00:05:31.247 "compare": false, 00:05:31.247 "compare_and_write": false, 00:05:31.247 "abort": true, 00:05:31.247 "nvme_admin": false, 00:05:31.247 "nvme_io": false 00:05:31.247 }, 00:05:31.247 "memory_domains": [ 00:05:31.247 { 00:05:31.247 "dma_device_id": "system", 00:05:31.247 "dma_device_type": 1 00:05:31.247 }, 00:05:31.247 { 00:05:31.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.247 "dma_device_type": 2 00:05:31.247 } 00:05:31.247 ], 00:05:31.247 "driver_specific": { 00:05:31.247 "passthru": { 00:05:31.247 "name": "Passthru0", 00:05:31.247 "base_bdev_name": "Malloc2" 00:05:31.247 } 00:05:31.247 } 00:05:31.247 } 00:05:31.247 ]' 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.247 00:05:31.247 real 0m0.357s 00:05:31.247 user 0m0.227s 
00:05:31.247 sys 0m0.036s 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:31.247 09:54:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 ************************************ 00:05:31.247 END TEST rpc_daemon_integrity 00:05:31.247 ************************************ 00:05:31.247 09:54:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.247 09:54:20 rpc -- rpc/rpc.sh@84 -- # killprocess 62017 00:05:31.247 09:54:20 rpc -- common/autotest_common.sh@949 -- # '[' -z 62017 ']' 00:05:31.247 09:54:20 rpc -- common/autotest_common.sh@953 -- # kill -0 62017 00:05:31.247 09:54:20 rpc -- common/autotest_common.sh@954 -- # uname 00:05:31.247 09:54:20 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:31.247 09:54:20 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62017 00:05:31.505 09:54:20 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:31.505 09:54:20 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:31.505 09:54:20 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62017' 00:05:31.505 killing process with pid 62017 00:05:31.505 09:54:20 rpc -- common/autotest_common.sh@968 -- # kill 62017 00:05:31.505 09:54:20 rpc -- common/autotest_common.sh@973 -- # wait 62017 00:05:33.409 00:05:33.409 real 0m4.772s 00:05:33.409 user 0m5.548s 00:05:33.409 sys 0m0.681s 00:05:33.409 09:54:22 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:33.409 ************************************ 00:05:33.409 END TEST rpc 00:05:33.409 ************************************ 00:05:33.409 09:54:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.409 09:54:22 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:33.409 09:54:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:33.409 09:54:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:33.409 09:54:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.409 ************************************ 00:05:33.409 START TEST skip_rpc 00:05:33.409 ************************************ 00:05:33.409 09:54:22 skip_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:33.668 * Looking for test storage... 
00:05:33.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.668 09:54:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.668 09:54:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.668 09:54:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:33.668 09:54:22 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:33.668 09:54:22 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:33.668 09:54:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.668 ************************************ 00:05:33.668 START TEST skip_rpc 00:05:33.668 ************************************ 00:05:33.668 09:54:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:33.668 09:54:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62232 00:05:33.668 09:54:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.668 09:54:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:33.668 09:54:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:33.668 [2024-06-10 09:54:23.105465] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:33.668 [2024-06-10 09:54:23.105677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:05:33.927 [2024-06-10 09:54:23.280251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.193 [2024-06-10 09:54:23.525052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:39.482 09:54:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:39.482 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:39.482 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:39.482 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:39.482 09:54:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:39.482 09:54:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62232 00:05:39.482 09:54:28 
skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 62232 ']' 00:05:39.482 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 62232 00:05:39.482 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:39.483 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:39.483 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62232 00:05:39.483 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:39.483 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:39.483 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62232' 00:05:39.483 killing process with pid 62232 00:05:39.483 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 62232 00:05:39.483 09:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 62232 00:05:40.859 00:05:40.859 real 0m7.149s 00:05:40.859 user 0m6.730s 00:05:40.859 sys 0m0.313s 00:05:40.859 09:54:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.860 09:54:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.860 ************************************ 00:05:40.860 END TEST skip_rpc 00:05:40.860 ************************************ 00:05:40.860 09:54:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.860 09:54:30 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:40.860 09:54:30 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.860 09:54:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.860 ************************************ 00:05:40.860 START TEST skip_rpc_with_json 00:05:40.860 ************************************ 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62336 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62336 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 62336 ']' 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:40.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:40.860 09:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.860 [2024-06-10 09:54:30.309991] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
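The skip_rpc_with_json pass that follows drives a config round trip: create a TCP transport over RPC, dump the live configuration with save_config, then boot a fresh target from the saved file and grep its log for the transport banner. A minimal sketch of that flow, assuming SPDK's scripts/rpc.py as the client behind the rpc_cmd helper (paths as used elsewhere in this log):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > config.json   # dump the live config as JSON
  # restart without an RPC server, replaying the saved config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
  grep -q 'TCP Transport Init' log.txt   # the transport must come back up from the file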
00:05:40.860 [2024-06-10 09:54:30.310240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62336 ] 00:05:41.118 [2024-06-10 09:54:30.482004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.376 [2024-06-10 09:54:30.716070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.942 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:41.942 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:41.942 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:41.942 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:41.942 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.942 [2024-06-10 09:54:31.457301] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.200 request: 00:05:42.200 { 00:05:42.200 "trtype": "tcp", 00:05:42.200 "method": "nvmf_get_transports", 00:05:42.200 "req_id": 1 00:05:42.200 } 00:05:42.200 Got JSON-RPC error response 00:05:42.200 response: 00:05:42.200 { 00:05:42.200 "code": -19, 00:05:42.200 "message": "No such device" 00:05:42.200 } 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.200 [2024-06-10 09:54:31.469433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.200 { 00:05:42.200 "subsystems": [ 00:05:42.200 { 00:05:42.200 "subsystem": "keyring", 00:05:42.200 "config": [] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "iobuf", 00:05:42.200 "config": [ 00:05:42.200 { 00:05:42.200 "method": "iobuf_set_options", 00:05:42.200 "params": { 00:05:42.200 "small_pool_count": 8192, 00:05:42.200 "large_pool_count": 1024, 00:05:42.200 "small_bufsize": 8192, 00:05:42.200 "large_bufsize": 135168 00:05:42.200 } 00:05:42.200 } 00:05:42.200 ] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "sock", 00:05:42.200 "config": [ 00:05:42.200 { 00:05:42.200 "method": "sock_set_default_impl", 00:05:42.200 "params": { 00:05:42.200 "impl_name": "posix" 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "sock_impl_set_options", 00:05:42.200 "params": { 00:05:42.200 "impl_name": "ssl", 00:05:42.200 "recv_buf_size": 4096, 00:05:42.200 "send_buf_size": 4096, 00:05:42.200 
"enable_recv_pipe": true, 00:05:42.200 "enable_quickack": false, 00:05:42.200 "enable_placement_id": 0, 00:05:42.200 "enable_zerocopy_send_server": true, 00:05:42.200 "enable_zerocopy_send_client": false, 00:05:42.200 "zerocopy_threshold": 0, 00:05:42.200 "tls_version": 0, 00:05:42.200 "enable_ktls": false 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "sock_impl_set_options", 00:05:42.200 "params": { 00:05:42.200 "impl_name": "posix", 00:05:42.200 "recv_buf_size": 2097152, 00:05:42.200 "send_buf_size": 2097152, 00:05:42.200 "enable_recv_pipe": true, 00:05:42.200 "enable_quickack": false, 00:05:42.200 "enable_placement_id": 0, 00:05:42.200 "enable_zerocopy_send_server": true, 00:05:42.200 "enable_zerocopy_send_client": false, 00:05:42.200 "zerocopy_threshold": 0, 00:05:42.200 "tls_version": 0, 00:05:42.200 "enable_ktls": false 00:05:42.200 } 00:05:42.200 } 00:05:42.200 ] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "vmd", 00:05:42.200 "config": [] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "accel", 00:05:42.200 "config": [ 00:05:42.200 { 00:05:42.200 "method": "accel_set_options", 00:05:42.200 "params": { 00:05:42.200 "small_cache_size": 128, 00:05:42.200 "large_cache_size": 16, 00:05:42.200 "task_count": 2048, 00:05:42.200 "sequence_count": 2048, 00:05:42.200 "buf_count": 2048 00:05:42.200 } 00:05:42.200 } 00:05:42.200 ] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "bdev", 00:05:42.200 "config": [ 00:05:42.200 { 00:05:42.200 "method": "bdev_set_options", 00:05:42.200 "params": { 00:05:42.200 "bdev_io_pool_size": 65535, 00:05:42.200 "bdev_io_cache_size": 256, 00:05:42.200 "bdev_auto_examine": true, 00:05:42.200 "iobuf_small_cache_size": 128, 00:05:42.200 "iobuf_large_cache_size": 16 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "bdev_raid_set_options", 00:05:42.200 "params": { 00:05:42.200 "process_window_size_kb": 1024 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "bdev_iscsi_set_options", 00:05:42.200 "params": { 00:05:42.200 "timeout_sec": 30 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "bdev_nvme_set_options", 00:05:42.200 "params": { 00:05:42.200 "action_on_timeout": "none", 00:05:42.200 "timeout_us": 0, 00:05:42.200 "timeout_admin_us": 0, 00:05:42.200 "keep_alive_timeout_ms": 10000, 00:05:42.200 "arbitration_burst": 0, 00:05:42.200 "low_priority_weight": 0, 00:05:42.200 "medium_priority_weight": 0, 00:05:42.200 "high_priority_weight": 0, 00:05:42.200 "nvme_adminq_poll_period_us": 10000, 00:05:42.200 "nvme_ioq_poll_period_us": 0, 00:05:42.200 "io_queue_requests": 0, 00:05:42.200 "delay_cmd_submit": true, 00:05:42.200 "transport_retry_count": 4, 00:05:42.200 "bdev_retry_count": 3, 00:05:42.200 "transport_ack_timeout": 0, 00:05:42.200 "ctrlr_loss_timeout_sec": 0, 00:05:42.200 "reconnect_delay_sec": 0, 00:05:42.200 "fast_io_fail_timeout_sec": 0, 00:05:42.200 "disable_auto_failback": false, 00:05:42.200 "generate_uuids": false, 00:05:42.200 "transport_tos": 0, 00:05:42.200 "nvme_error_stat": false, 00:05:42.200 "rdma_srq_size": 0, 00:05:42.200 "io_path_stat": false, 00:05:42.200 "allow_accel_sequence": false, 00:05:42.200 "rdma_max_cq_size": 0, 00:05:42.200 "rdma_cm_event_timeout_ms": 0, 00:05:42.200 "dhchap_digests": [ 00:05:42.200 "sha256", 00:05:42.200 "sha384", 00:05:42.200 "sha512" 00:05:42.200 ], 00:05:42.200 "dhchap_dhgroups": [ 00:05:42.200 "null", 00:05:42.200 "ffdhe2048", 00:05:42.200 "ffdhe3072", 00:05:42.200 "ffdhe4096", 00:05:42.200 "ffdhe6144", 
00:05:42.200 "ffdhe8192" 00:05:42.200 ] 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "bdev_nvme_set_hotplug", 00:05:42.200 "params": { 00:05:42.200 "period_us": 100000, 00:05:42.200 "enable": false 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "bdev_wait_for_examine" 00:05:42.200 } 00:05:42.200 ] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "scsi", 00:05:42.200 "config": null 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "scheduler", 00:05:42.200 "config": [ 00:05:42.200 { 00:05:42.200 "method": "framework_set_scheduler", 00:05:42.200 "params": { 00:05:42.200 "name": "static" 00:05:42.200 } 00:05:42.200 } 00:05:42.200 ] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "vhost_scsi", 00:05:42.200 "config": [] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "vhost_blk", 00:05:42.200 "config": [] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "ublk", 00:05:42.200 "config": [] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "nbd", 00:05:42.200 "config": [] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "nvmf", 00:05:42.200 "config": [ 00:05:42.200 { 00:05:42.200 "method": "nvmf_set_config", 00:05:42.200 "params": { 00:05:42.200 "discovery_filter": "match_any", 00:05:42.200 "admin_cmd_passthru": { 00:05:42.200 "identify_ctrlr": false 00:05:42.200 } 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "nvmf_set_max_subsystems", 00:05:42.200 "params": { 00:05:42.200 "max_subsystems": 1024 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "nvmf_set_crdt", 00:05:42.200 "params": { 00:05:42.200 "crdt1": 0, 00:05:42.200 "crdt2": 0, 00:05:42.200 "crdt3": 0 00:05:42.200 } 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "method": "nvmf_create_transport", 00:05:42.200 "params": { 00:05:42.200 "trtype": "TCP", 00:05:42.200 "max_queue_depth": 128, 00:05:42.200 "max_io_qpairs_per_ctrlr": 127, 00:05:42.200 "in_capsule_data_size": 4096, 00:05:42.200 "max_io_size": 131072, 00:05:42.200 "io_unit_size": 131072, 00:05:42.200 "max_aq_depth": 128, 00:05:42.200 "num_shared_buffers": 511, 00:05:42.200 "buf_cache_size": 4294967295, 00:05:42.200 "dif_insert_or_strip": false, 00:05:42.200 "zcopy": false, 00:05:42.200 "c2h_success": true, 00:05:42.200 "sock_priority": 0, 00:05:42.200 "abort_timeout_sec": 1, 00:05:42.200 "ack_timeout": 0, 00:05:42.200 "data_wr_pool_size": 0 00:05:42.200 } 00:05:42.200 } 00:05:42.200 ] 00:05:42.200 }, 00:05:42.200 { 00:05:42.200 "subsystem": "iscsi", 00:05:42.200 "config": [ 00:05:42.200 { 00:05:42.200 "method": "iscsi_set_options", 00:05:42.200 "params": { 00:05:42.200 "node_base": "iqn.2016-06.io.spdk", 00:05:42.200 "max_sessions": 128, 00:05:42.200 "max_connections_per_session": 2, 00:05:42.200 "max_queue_depth": 64, 00:05:42.200 "default_time2wait": 2, 00:05:42.200 "default_time2retain": 20, 00:05:42.200 "first_burst_length": 8192, 00:05:42.200 "immediate_data": true, 00:05:42.200 "allow_duplicated_isid": false, 00:05:42.200 "error_recovery_level": 0, 00:05:42.200 "nop_timeout": 60, 00:05:42.200 "nop_in_interval": 30, 00:05:42.200 "disable_chap": false, 00:05:42.200 "require_chap": false, 00:05:42.200 "mutual_chap": false, 00:05:42.200 "chap_group": 0, 00:05:42.200 "max_large_datain_per_connection": 64, 00:05:42.200 "max_r2t_per_connection": 4, 00:05:42.200 "pdu_pool_size": 36864, 00:05:42.200 "immediate_data_pool_size": 16384, 00:05:42.200 "data_out_pool_size": 2048 00:05:42.200 } 00:05:42.200 } 00:05:42.200 ] 00:05:42.200 } 00:05:42.200 ] 
00:05:42.200 } 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62336 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 62336 ']' 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 62336 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:42.200 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62336 00:05:42.201 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:42.201 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:42.201 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62336' 00:05:42.201 killing process with pid 62336 00:05:42.201 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 62336 00:05:42.201 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 62336 00:05:44.731 09:54:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62387 00:05:44.731 09:54:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:44.731 09:54:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:50.001 09:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62387 00:05:50.001 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 62387 ']' 00:05:50.001 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 62387 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62387 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:50.002 killing process with pid 62387 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62387' 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 62387 00:05:50.002 09:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 62387 00:05:51.907 09:54:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:51.907 09:54:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:51.907 00:05:51.907 real 0m10.773s 00:05:51.907 user 0m10.387s 00:05:51.907 sys 0m0.711s 00:05:51.907 09:54:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:51.907 09:54:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.907 
************************************ 00:05:51.907 END TEST skip_rpc_with_json 00:05:51.907 ************************************ 00:05:51.907 09:54:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:51.907 09:54:41 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:51.907 09:54:41 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:51.907 09:54:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.907 ************************************ 00:05:51.907 START TEST skip_rpc_with_delay 00:05:51.907 ************************************ 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.907 [2024-06-10 09:54:41.133055] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:51.907 [2024-06-10 09:54:41.133243] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:51.907 00:05:51.907 real 0m0.181s 00:05:51.907 user 0m0.096s 00:05:51.907 sys 0m0.083s 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:51.907 ************************************ 00:05:51.907 09:54:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:51.907 END TEST skip_rpc_with_delay 00:05:51.907 ************************************ 00:05:51.907 09:54:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:51.907 09:54:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:51.907 09:54:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:51.907 09:54:41 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:51.907 09:54:41 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:51.907 09:54:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.907 ************************************ 00:05:51.907 START TEST exit_on_failed_rpc_init 00:05:51.907 ************************************ 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62515 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62515 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 62515 ']' 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:51.907 09:54:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.907 [2024-06-10 09:54:41.344782] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:05:51.908 [2024-06-10 09:54:41.344933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62515 ] 00:05:52.166 [2024-06-10 09:54:41.509275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.446 [2024-06-10 09:54:41.738267] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:53.022 09:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.282 [2024-06-10 09:54:42.582823] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:53.282 [2024-06-10 09:54:42.582989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62543 ] 00:05:53.282 [2024-06-10 09:54:42.754662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.541 [2024-06-10 09:54:42.978400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.541 [2024-06-10 09:54:42.978517] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
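This socket-in-use *ERROR* is the failure path exit_on_failed_rpc_init provokes: the first spdk_tgt (pid 62515) already listens on /var/tmp/spdk.sock, so a second instance started on another core mask must fail RPC initialization and exit non-zero. A minimal sketch of the scenario, assuming the same binary path ('first_pid' is a hypothetical variable name):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # first instance claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 1                                                    # give it time to bind the socket
  ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2   # second instance must fail to start
  kill "$first_pid"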
00:05:53.541 [2024-06-10 09:54:42.978539] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:53.541 [2024-06-10 09:54:42.978567] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62515 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 62515 ']' 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 62515 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62515 00:05:54.109 killing process with pid 62515 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62515' 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 62515 00:05:54.109 09:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 62515 00:05:56.013 00:05:56.013 real 0m4.179s 00:05:56.013 user 0m5.004s 00:05:56.013 sys 0m0.497s 00:05:56.013 ************************************ 00:05:56.013 END TEST exit_on_failed_rpc_init 00:05:56.013 ************************************ 00:05:56.013 09:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.013 09:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.013 09:54:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.013 00:05:56.013 real 0m22.577s 00:05:56.013 user 0m22.311s 00:05:56.013 sys 0m1.791s 00:05:56.013 ************************************ 00:05:56.013 END TEST skip_rpc 00:05:56.013 ************************************ 00:05:56.013 09:54:45 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.013 09:54:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.013 09:54:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.013 09:54:45 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.013 09:54:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.013 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.013 
************************************ 00:05:56.013 START TEST rpc_client 00:05:56.013 ************************************ 00:05:56.013 09:54:45 rpc_client -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.271 * Looking for test storage... 00:05:56.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:56.271 09:54:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:56.271 OK 00:05:56.271 09:54:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.271 00:05:56.271 real 0m0.152s 00:05:56.271 user 0m0.058s 00:05:56.271 sys 0m0.098s 00:05:56.271 09:54:45 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.272 09:54:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.272 ************************************ 00:05:56.272 END TEST rpc_client 00:05:56.272 ************************************ 00:05:56.272 09:54:45 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.272 09:54:45 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.272 09:54:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.272 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.272 ************************************ 00:05:56.272 START TEST json_config 00:05:56.272 ************************************ 00:05:56.272 09:54:45 json_config -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.531 09:54:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.531 09:54:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.531 09:54:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97c1d2c7-f3c7-4dc5-9a74-d2f35dc4a034 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=97c1d2c7-f3c7-4dc5-9a74-d2f35dc4a034 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.532 09:54:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.532 09:54:45 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.532 09:54:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.532 09:54:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.532 09:54:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@47 -- # : 0 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.532 09:54:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.532 09:54:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.532 09:54:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.532 09:54:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.532 09:54:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.532 09:54:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.532 WARNING: No tests are enabled so not running JSON configuration tests 00:05:56.532 09:54:45 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:56.532 09:54:45 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:56.532 ************************************ 00:05:56.532 END TEST json_config 00:05:56.532 ************************************ 00:05:56.532 00:05:56.532 real 0m0.082s 00:05:56.532 user 0m0.036s 00:05:56.532 sys 0m0.042s 00:05:56.532 09:54:45 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.532 09:54:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.532 09:54:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.532 09:54:45 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.532 09:54:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.532 09:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.532 ************************************ 00:05:56.532 START TEST json_config_extra_key 00:05:56.532 ************************************ 00:05:56.532 09:54:45 json_config_extra_key -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.532 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97c1d2c7-f3c7-4dc5-9a74-d2f35dc4a034 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=97c1d2c7-f3c7-4dc5-9a74-d2f35dc4a034 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.532 09:54:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.532 09:54:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.532 09:54:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.532 
09:54:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.532 09:54:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.532 09:54:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.532 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.532 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:56.532 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:56.532 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:56.533 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:56.533 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.533 09:54:45 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:56.533 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:56.533 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:56.533 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.533 INFO: launching applications... 00:05:56.533 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:56.533 09:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62719 00:05:56.533 Waiting for target to run... 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62719 /var/tmp/spdk_tgt.sock 00:05:56.533 09:54:45 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 62719 ']' 00:05:56.533 09:54:45 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.533 09:54:45 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.533 09:54:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.533 09:54:45 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.533 09:54:45 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:56.533 09:54:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.791 [2024-06-10 09:54:46.111266] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
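The json_config_extra_key trace above launches a dedicated target from a pre-baked JSON configuration and blocks until its RPC socket answers. Below is a minimal sketch of that launch-and-wait pattern: the binary path, flags (-m 0x1 -s 1024 -r ... --json ...) and socket path are taken from the trace, while the polling loop and its retry budget are an illustrative stand-in for the waitforlisten helper, not its actual implementation.

# Sketch of the launch-and-wait pattern traced above (simplified waitforlisten stand-in).
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_SOCK=/var/tmp/spdk_tgt.sock
CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

# Start the target on core 0 with 1024 MiB of memory and the extra_key.json config.
"$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
tgt_pid=$!

# Poll the RPC socket until the target responds (retry budget chosen for illustration).
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        echo "target $tgt_pid is up and listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done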
00:05:56.791 [2024-06-10 09:54:46.111449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62719 ] 00:05:57.050 [2024-06-10 09:54:46.446676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.309 [2024-06-10 09:54:46.659685] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.876 09:54:47 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:57.876 00:05:57.876 INFO: shutting down applications... 00:05:57.876 09:54:47 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:57.876 09:54:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.876 09:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:57.876 09:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.876 09:54:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.876 09:54:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.876 09:54:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62719 ]] 00:05:57.876 09:54:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62719 00:05:57.876 09:54:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.877 09:54:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.877 09:54:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62719 00:05:57.877 09:54:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.444 09:54:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.444 09:54:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.444 09:54:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62719 00:05:58.444 09:54:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.011 09:54:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.011 09:54:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.011 09:54:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62719 00:05:59.011 09:54:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.270 09:54:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.270 09:54:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.270 09:54:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62719 00:05:59.270 09:54:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.837 09:54:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.838 09:54:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.838 09:54:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62719 00:05:59.838 09:54:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.409 09:54:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.409 09:54:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.409 09:54:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62719 00:06:00.409 09:54:49 
json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:00.409 09:54:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:00.409 SPDK target shutdown done 00:06:00.409 09:54:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:00.409 09:54:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:00.409 Success 00:06:00.409 09:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:00.409 00:06:00.409 real 0m3.901s 00:06:00.409 user 0m3.560s 00:06:00.409 sys 0m0.495s 00:06:00.409 09:54:49 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:00.409 09:54:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.409 ************************************ 00:06:00.409 END TEST json_config_extra_key 00:06:00.409 ************************************ 00:06:00.409 09:54:49 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.409 09:54:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:00.409 09:54:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:00.409 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.409 ************************************ 00:06:00.409 START TEST alias_rpc 00:06:00.409 ************************************ 00:06:00.409 09:54:49 alias_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.409 * Looking for test storage... 00:06:00.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:00.409 09:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.409 09:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62816 00:06:00.409 09:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.409 09:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62816 00:06:00.409 09:54:49 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 62816 ']' 00:06:00.409 09:54:49 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.409 09:54:49 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:00.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.409 09:54:49 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.409 09:54:49 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:00.409 09:54:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.668 [2024-06-10 09:54:50.031540] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
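The shutdown sequence traced above for the json_config_extra_key target follows the pattern in json_config/common.sh: send SIGINT to the recorded pid, then poll it with kill -0 every half second for up to 30 iterations before reporting "SPDK target shutdown done". A condensed sketch of that loop is below; the pid value is the one reported in the trace and is only illustrative here.

# Condensed sketch of the graceful-shutdown poll traced above.
app_pid=62719            # pid reported by the trace; illustrative

kill -SIGINT "$app_pid"  # ask the target to shut down cleanly

for (( i = 0; i < 30; i++ )); do
    # kill -0 fails once the process has exited.
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done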
00:06:00.668 [2024-06-10 09:54:50.031732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62816 ] 00:06:00.925 [2024-06-10 09:54:50.207261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.184 [2024-06-10 09:54:50.448678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.750 09:54:51 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:01.750 09:54:51 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:01.750 09:54:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:02.008 09:54:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62816 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 62816 ']' 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 62816 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62816 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:02.008 killing process with pid 62816 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62816' 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@968 -- # kill 62816 00:06:02.008 09:54:51 alias_rpc -- common/autotest_common.sh@973 -- # wait 62816 00:06:04.573 ************************************ 00:06:04.573 END TEST alias_rpc 00:06:04.573 ************************************ 00:06:04.573 00:06:04.573 real 0m3.836s 00:06:04.573 user 0m4.056s 00:06:04.573 sys 0m0.461s 00:06:04.573 09:54:53 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.573 09:54:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.573 09:54:53 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:04.573 09:54:53 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:04.573 09:54:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:04.573 09:54:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.573 09:54:53 -- common/autotest_common.sh@10 -- # set +x 00:06:04.573 ************************************ 00:06:04.573 START TEST spdkcli_tcp 00:06:04.573 ************************************ 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:04.573 * Looking for test storage... 
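Every suite in this log (rpc_client, json_config, json_config_extra_key, alias_rpc, spdkcli_tcp, ...) is driven through the same run_test wrapper from autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timing seen above. The following is only a rough reconstruction of that wrapper's shape for orientation, not the actual helper; the function name is made up.

# Rough reconstruction of the run_test shape seen throughout this log (illustrative only).
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # run the suite script and report real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Usage, mirroring the trace above:
# run_test_sketch alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh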
00:06:04.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62910 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62910 00:06:04.573 09:54:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 62910 ']' 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:04.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:04.573 09:54:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.573 [2024-06-10 09:54:53.904424] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
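spdkcli_tcp exercises the JSON-RPC server over TCP rather than the default UNIX socket. As the trace that follows shows, tcp.sh starts a second target (core mask 0x3, two reactors), bridges TCP port 9998 to /var/tmp/spdk.sock with socat, and then points rpc.py at 127.0.0.1:9998. A minimal sketch of that bridge, with the addresses and rpc.py options taken directly from the trace:

# Minimal sketch of the TCP bridge used by spdkcli/tcp.sh (values as traced below).
RPC_SOCK=/var/tmp/spdk.sock
IP_ADDRESS=127.0.0.1
PORT=9998

# Forward TCP connections on port 9998 to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:"$PORT" UNIX-CONNECT:"$RPC_SOCK" &
socat_pid=$!

# Drive the RPC server over TCP: retry up to 100 times, 2 s timeout per call.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods

kill "$socat_pid"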
00:06:04.573 [2024-06-10 09:54:53.904596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:06:04.573 [2024-06-10 09:54:54.076933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.832 [2024-06-10 09:54:54.311239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.832 [2024-06-10 09:54:54.311239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.766 09:54:55 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:05.766 09:54:55 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:06:05.766 09:54:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62932 00:06:05.766 09:54:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:05.767 09:54:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:06.025 [ 00:06:06.025 "bdev_malloc_delete", 00:06:06.025 "bdev_malloc_create", 00:06:06.025 "bdev_null_resize", 00:06:06.025 "bdev_null_delete", 00:06:06.025 "bdev_null_create", 00:06:06.025 "bdev_nvme_cuse_unregister", 00:06:06.025 "bdev_nvme_cuse_register", 00:06:06.025 "bdev_opal_new_user", 00:06:06.025 "bdev_opal_set_lock_state", 00:06:06.025 "bdev_opal_delete", 00:06:06.025 "bdev_opal_get_info", 00:06:06.025 "bdev_opal_create", 00:06:06.025 "bdev_nvme_opal_revert", 00:06:06.025 "bdev_nvme_opal_init", 00:06:06.025 "bdev_nvme_send_cmd", 00:06:06.025 "bdev_nvme_get_path_iostat", 00:06:06.025 "bdev_nvme_get_mdns_discovery_info", 00:06:06.025 "bdev_nvme_stop_mdns_discovery", 00:06:06.025 "bdev_nvme_start_mdns_discovery", 00:06:06.025 "bdev_nvme_set_multipath_policy", 00:06:06.025 "bdev_nvme_set_preferred_path", 00:06:06.025 "bdev_nvme_get_io_paths", 00:06:06.025 "bdev_nvme_remove_error_injection", 00:06:06.025 "bdev_nvme_add_error_injection", 00:06:06.025 "bdev_nvme_get_discovery_info", 00:06:06.025 "bdev_nvme_stop_discovery", 00:06:06.025 "bdev_nvme_start_discovery", 00:06:06.025 "bdev_nvme_get_controller_health_info", 00:06:06.025 "bdev_nvme_disable_controller", 00:06:06.025 "bdev_nvme_enable_controller", 00:06:06.025 "bdev_nvme_reset_controller", 00:06:06.025 "bdev_nvme_get_transport_statistics", 00:06:06.025 "bdev_nvme_apply_firmware", 00:06:06.025 "bdev_nvme_detach_controller", 00:06:06.025 "bdev_nvme_get_controllers", 00:06:06.025 "bdev_nvme_attach_controller", 00:06:06.025 "bdev_nvme_set_hotplug", 00:06:06.025 "bdev_nvme_set_options", 00:06:06.025 "bdev_passthru_delete", 00:06:06.026 "bdev_passthru_create", 00:06:06.026 "bdev_lvol_set_parent_bdev", 00:06:06.026 "bdev_lvol_set_parent", 00:06:06.026 "bdev_lvol_check_shallow_copy", 00:06:06.026 "bdev_lvol_start_shallow_copy", 00:06:06.026 "bdev_lvol_grow_lvstore", 00:06:06.026 "bdev_lvol_get_lvols", 00:06:06.026 "bdev_lvol_get_lvstores", 00:06:06.026 "bdev_lvol_delete", 00:06:06.026 "bdev_lvol_set_read_only", 00:06:06.026 "bdev_lvol_resize", 00:06:06.026 "bdev_lvol_decouple_parent", 00:06:06.026 "bdev_lvol_inflate", 00:06:06.026 "bdev_lvol_rename", 00:06:06.026 "bdev_lvol_clone_bdev", 00:06:06.026 "bdev_lvol_clone", 00:06:06.026 "bdev_lvol_snapshot", 00:06:06.026 "bdev_lvol_create", 00:06:06.026 "bdev_lvol_delete_lvstore", 00:06:06.026 "bdev_lvol_rename_lvstore", 00:06:06.026 "bdev_lvol_create_lvstore", 00:06:06.026 
"bdev_raid_set_options", 00:06:06.026 "bdev_raid_remove_base_bdev", 00:06:06.026 "bdev_raid_add_base_bdev", 00:06:06.026 "bdev_raid_delete", 00:06:06.026 "bdev_raid_create", 00:06:06.026 "bdev_raid_get_bdevs", 00:06:06.026 "bdev_error_inject_error", 00:06:06.026 "bdev_error_delete", 00:06:06.026 "bdev_error_create", 00:06:06.026 "bdev_split_delete", 00:06:06.026 "bdev_split_create", 00:06:06.026 "bdev_delay_delete", 00:06:06.026 "bdev_delay_create", 00:06:06.026 "bdev_delay_update_latency", 00:06:06.026 "bdev_zone_block_delete", 00:06:06.026 "bdev_zone_block_create", 00:06:06.026 "blobfs_create", 00:06:06.026 "blobfs_detect", 00:06:06.026 "blobfs_set_cache_size", 00:06:06.026 "bdev_xnvme_delete", 00:06:06.026 "bdev_xnvme_create", 00:06:06.026 "bdev_aio_delete", 00:06:06.026 "bdev_aio_rescan", 00:06:06.026 "bdev_aio_create", 00:06:06.026 "bdev_ftl_set_property", 00:06:06.026 "bdev_ftl_get_properties", 00:06:06.026 "bdev_ftl_get_stats", 00:06:06.026 "bdev_ftl_unmap", 00:06:06.026 "bdev_ftl_unload", 00:06:06.026 "bdev_ftl_delete", 00:06:06.026 "bdev_ftl_load", 00:06:06.026 "bdev_ftl_create", 00:06:06.026 "bdev_virtio_attach_controller", 00:06:06.026 "bdev_virtio_scsi_get_devices", 00:06:06.026 "bdev_virtio_detach_controller", 00:06:06.026 "bdev_virtio_blk_set_hotplug", 00:06:06.026 "bdev_iscsi_delete", 00:06:06.026 "bdev_iscsi_create", 00:06:06.026 "bdev_iscsi_set_options", 00:06:06.026 "accel_error_inject_error", 00:06:06.026 "ioat_scan_accel_module", 00:06:06.026 "dsa_scan_accel_module", 00:06:06.026 "iaa_scan_accel_module", 00:06:06.026 "keyring_file_remove_key", 00:06:06.026 "keyring_file_add_key", 00:06:06.026 "keyring_linux_set_options", 00:06:06.026 "iscsi_get_histogram", 00:06:06.026 "iscsi_enable_histogram", 00:06:06.026 "iscsi_set_options", 00:06:06.026 "iscsi_get_auth_groups", 00:06:06.026 "iscsi_auth_group_remove_secret", 00:06:06.026 "iscsi_auth_group_add_secret", 00:06:06.026 "iscsi_delete_auth_group", 00:06:06.026 "iscsi_create_auth_group", 00:06:06.026 "iscsi_set_discovery_auth", 00:06:06.026 "iscsi_get_options", 00:06:06.026 "iscsi_target_node_request_logout", 00:06:06.026 "iscsi_target_node_set_redirect", 00:06:06.026 "iscsi_target_node_set_auth", 00:06:06.026 "iscsi_target_node_add_lun", 00:06:06.026 "iscsi_get_stats", 00:06:06.026 "iscsi_get_connections", 00:06:06.026 "iscsi_portal_group_set_auth", 00:06:06.026 "iscsi_start_portal_group", 00:06:06.026 "iscsi_delete_portal_group", 00:06:06.026 "iscsi_create_portal_group", 00:06:06.026 "iscsi_get_portal_groups", 00:06:06.026 "iscsi_delete_target_node", 00:06:06.026 "iscsi_target_node_remove_pg_ig_maps", 00:06:06.026 "iscsi_target_node_add_pg_ig_maps", 00:06:06.026 "iscsi_create_target_node", 00:06:06.026 "iscsi_get_target_nodes", 00:06:06.026 "iscsi_delete_initiator_group", 00:06:06.026 "iscsi_initiator_group_remove_initiators", 00:06:06.026 "iscsi_initiator_group_add_initiators", 00:06:06.026 "iscsi_create_initiator_group", 00:06:06.026 "iscsi_get_initiator_groups", 00:06:06.026 "nvmf_set_crdt", 00:06:06.026 "nvmf_set_config", 00:06:06.026 "nvmf_set_max_subsystems", 00:06:06.026 "nvmf_stop_mdns_prr", 00:06:06.026 "nvmf_publish_mdns_prr", 00:06:06.026 "nvmf_subsystem_get_listeners", 00:06:06.026 "nvmf_subsystem_get_qpairs", 00:06:06.026 "nvmf_subsystem_get_controllers", 00:06:06.026 "nvmf_get_stats", 00:06:06.026 "nvmf_get_transports", 00:06:06.026 "nvmf_create_transport", 00:06:06.026 "nvmf_get_targets", 00:06:06.026 "nvmf_delete_target", 00:06:06.026 "nvmf_create_target", 00:06:06.026 "nvmf_subsystem_allow_any_host", 
00:06:06.026 "nvmf_subsystem_remove_host", 00:06:06.026 "nvmf_subsystem_add_host", 00:06:06.026 "nvmf_ns_remove_host", 00:06:06.026 "nvmf_ns_add_host", 00:06:06.026 "nvmf_subsystem_remove_ns", 00:06:06.026 "nvmf_subsystem_add_ns", 00:06:06.026 "nvmf_subsystem_listener_set_ana_state", 00:06:06.026 "nvmf_discovery_get_referrals", 00:06:06.026 "nvmf_discovery_remove_referral", 00:06:06.026 "nvmf_discovery_add_referral", 00:06:06.026 "nvmf_subsystem_remove_listener", 00:06:06.026 "nvmf_subsystem_add_listener", 00:06:06.026 "nvmf_delete_subsystem", 00:06:06.026 "nvmf_create_subsystem", 00:06:06.026 "nvmf_get_subsystems", 00:06:06.026 "env_dpdk_get_mem_stats", 00:06:06.026 "nbd_get_disks", 00:06:06.026 "nbd_stop_disk", 00:06:06.026 "nbd_start_disk", 00:06:06.026 "ublk_recover_disk", 00:06:06.026 "ublk_get_disks", 00:06:06.026 "ublk_stop_disk", 00:06:06.026 "ublk_start_disk", 00:06:06.026 "ublk_destroy_target", 00:06:06.026 "ublk_create_target", 00:06:06.026 "virtio_blk_create_transport", 00:06:06.026 "virtio_blk_get_transports", 00:06:06.026 "vhost_controller_set_coalescing", 00:06:06.026 "vhost_get_controllers", 00:06:06.026 "vhost_delete_controller", 00:06:06.026 "vhost_create_blk_controller", 00:06:06.027 "vhost_scsi_controller_remove_target", 00:06:06.027 "vhost_scsi_controller_add_target", 00:06:06.027 "vhost_start_scsi_controller", 00:06:06.027 "vhost_create_scsi_controller", 00:06:06.027 "thread_set_cpumask", 00:06:06.027 "framework_get_scheduler", 00:06:06.027 "framework_set_scheduler", 00:06:06.027 "framework_get_reactors", 00:06:06.027 "thread_get_io_channels", 00:06:06.027 "thread_get_pollers", 00:06:06.027 "thread_get_stats", 00:06:06.027 "framework_monitor_context_switch", 00:06:06.027 "spdk_kill_instance", 00:06:06.027 "log_enable_timestamps", 00:06:06.027 "log_get_flags", 00:06:06.027 "log_clear_flag", 00:06:06.027 "log_set_flag", 00:06:06.027 "log_get_level", 00:06:06.027 "log_set_level", 00:06:06.027 "log_get_print_level", 00:06:06.027 "log_set_print_level", 00:06:06.027 "framework_enable_cpumask_locks", 00:06:06.027 "framework_disable_cpumask_locks", 00:06:06.027 "framework_wait_init", 00:06:06.027 "framework_start_init", 00:06:06.027 "scsi_get_devices", 00:06:06.027 "bdev_get_histogram", 00:06:06.027 "bdev_enable_histogram", 00:06:06.027 "bdev_set_qos_limit", 00:06:06.027 "bdev_set_qd_sampling_period", 00:06:06.027 "bdev_get_bdevs", 00:06:06.027 "bdev_reset_iostat", 00:06:06.027 "bdev_get_iostat", 00:06:06.027 "bdev_examine", 00:06:06.027 "bdev_wait_for_examine", 00:06:06.027 "bdev_set_options", 00:06:06.027 "notify_get_notifications", 00:06:06.027 "notify_get_types", 00:06:06.027 "accel_get_stats", 00:06:06.027 "accel_set_options", 00:06:06.027 "accel_set_driver", 00:06:06.027 "accel_crypto_key_destroy", 00:06:06.027 "accel_crypto_keys_get", 00:06:06.027 "accel_crypto_key_create", 00:06:06.027 "accel_assign_opc", 00:06:06.027 "accel_get_module_info", 00:06:06.027 "accel_get_opc_assignments", 00:06:06.027 "vmd_rescan", 00:06:06.027 "vmd_remove_device", 00:06:06.027 "vmd_enable", 00:06:06.027 "sock_get_default_impl", 00:06:06.027 "sock_set_default_impl", 00:06:06.027 "sock_impl_set_options", 00:06:06.027 "sock_impl_get_options", 00:06:06.027 "iobuf_get_stats", 00:06:06.027 "iobuf_set_options", 00:06:06.027 "framework_get_pci_devices", 00:06:06.027 "framework_get_config", 00:06:06.027 "framework_get_subsystems", 00:06:06.027 "trace_get_info", 00:06:06.027 "trace_get_tpoint_group_mask", 00:06:06.027 "trace_disable_tpoint_group", 00:06:06.027 "trace_enable_tpoint_group", 
00:06:06.027 "trace_clear_tpoint_mask", 00:06:06.027 "trace_set_tpoint_mask", 00:06:06.027 "keyring_get_keys", 00:06:06.027 "spdk_get_version", 00:06:06.027 "rpc_get_methods" 00:06:06.027 ] 00:06:06.027 09:54:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.027 09:54:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:06.027 09:54:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62910 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 62910 ']' 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 62910 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 62910 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:06.027 killing process with pid 62910 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62910' 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 62910 00:06:06.027 09:54:55 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 62910 00:06:08.561 00:06:08.561 real 0m3.864s 00:06:08.561 user 0m6.870s 00:06:08.561 sys 0m0.512s 00:06:08.561 09:54:57 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.561 09:54:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 ************************************ 00:06:08.561 END TEST spdkcli_tcp 00:06:08.561 ************************************ 00:06:08.561 09:54:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.561 09:54:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:08.561 09:54:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.561 09:54:57 -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 ************************************ 00:06:08.561 START TEST dpdk_mem_utility 00:06:08.561 ************************************ 00:06:08.561 09:54:57 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.561 * Looking for test storage... 
00:06:08.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.561 09:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.561 09:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63018 00:06:08.561 09:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63018 00:06:08.561 09:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.561 09:54:57 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 63018 ']' 00:06:08.561 09:54:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.561 09:54:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:08.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.561 09:54:57 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.561 09:54:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:08.561 09:54:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.561 [2024-06-10 09:54:57.790766] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:08.561 [2024-06-10 09:54:57.790914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63018 ] 00:06:08.561 [2024-06-10 09:54:57.957145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.820 [2024-06-10 09:54:58.186258] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.757 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:09.757 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:06:09.757 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.757 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.757 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.757 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.757 { 00:06:09.757 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.757 } 00:06:09.757 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:09.757 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.757 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:09.757 1 heaps totaling size 820.000000 MiB 00:06:09.757 size: 820.000000 MiB heap id: 0 00:06:09.757 end heaps---------- 00:06:09.757 8 mempools totaling size 598.116089 MiB 00:06:09.757 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.757 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.757 size: 84.521057 MiB name: bdev_io_63018 00:06:09.757 size: 51.011292 MiB name: evtpool_63018 00:06:09.757 size: 50.003479 MiB name: msgpool_63018 00:06:09.757 size: 21.763794 MiB name: PDU_Pool 00:06:09.757 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.757 size: 0.026123 
MiB name: Session_Pool 00:06:09.757 end mempools------- 00:06:09.757 6 memzones totaling size 4.142822 MiB 00:06:09.757 size: 1.000366 MiB name: RG_ring_0_63018 00:06:09.757 size: 1.000366 MiB name: RG_ring_1_63018 00:06:09.757 size: 1.000366 MiB name: RG_ring_4_63018 00:06:09.757 size: 1.000366 MiB name: RG_ring_5_63018 00:06:09.757 size: 0.125366 MiB name: RG_ring_2_63018 00:06:09.757 size: 0.015991 MiB name: RG_ring_3_63018 00:06:09.757 end memzones------- 00:06:09.757 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.757 heap id: 0 total size: 820.000000 MiB number of busy elements: 296 number of free elements: 18 00:06:09.757 list of free elements. size: 18.452515 MiB 00:06:09.757 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:09.757 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:09.757 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:09.757 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:09.757 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:09.757 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:09.757 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:09.757 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:09.757 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:09.757 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:09.757 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:09.757 element at address: 0x200000200000 with size: 0.830200 MiB 00:06:09.757 element at address: 0x20001b000000 with size: 0.565125 MiB 00:06:09.757 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:09.757 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:09.757 element at address: 0x200013800000 with size: 0.467651 MiB 00:06:09.757 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:09.757 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:09.757 list of standard malloc elements. 
size: 199.283081 MiB 00:06:09.757 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:09.757 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:09.757 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:09.758 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:09.758 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:09.758 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:09.758 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:09.758 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:09.758 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:09.758 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:09.758 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:09.758 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:06:09.758 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:09.758 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013877b80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0912c0 
with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:09.758 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0943c0 with size: 0.000244 MiB 
00:06:09.759 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:09.759 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:09.759 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b280 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b780 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:09.759 element at 
address: 0x20002846cd80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:09.759 element at address: 0x20002846fe80 
with size: 0.000244 MiB 00:06:09.759 list of memzone associated elements. size: 602.264404 MiB 00:06:09.759 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:09.759 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.759 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:09.759 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.759 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:09.759 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63018_0 00:06:09.759 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:09.759 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63018_0 00:06:09.759 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:09.759 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63018_0 00:06:09.759 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:09.759 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.759 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:09.759 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.760 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:09.760 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63018 00:06:09.760 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:09.760 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63018 00:06:09.760 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:09.760 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63018 00:06:09.760 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:09.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.760 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:09.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.760 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:09.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.760 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:09.760 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.760 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:09.760 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63018 00:06:09.760 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:09.760 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63018 00:06:09.760 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:09.760 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63018 00:06:09.760 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:09.760 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63018 00:06:09.760 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:09.760 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63018 00:06:09.760 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:09.760 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.760 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:09.760 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.760 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:09.760 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.760 element at address: 
0x200003adf740 with size: 0.125549 MiB 00:06:09.760 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63018 00:06:09.760 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:09.760 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.760 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:09.760 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.760 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:09.760 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63018 00:06:09.760 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:09.760 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.760 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:09.760 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63018 00:06:09.760 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:09.760 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63018 00:06:09.760 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:09.760 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.760 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.760 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63018 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 63018 ']' 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 63018 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 63018 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:09.760 killing process with pid 63018 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63018' 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 63018 00:06:09.760 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 63018 00:06:12.294 ************************************ 00:06:12.294 END TEST dpdk_mem_utility 00:06:12.294 ************************************ 00:06:12.294 00:06:12.294 real 0m3.577s 00:06:12.294 user 0m3.763s 00:06:12.294 sys 0m0.420s 00:06:12.294 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.294 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.294 09:55:01 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.294 09:55:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:12.294 09:55:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.294 09:55:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.294 ************************************ 00:06:12.294 START TEST event 00:06:12.294 ************************************ 00:06:12.294 09:55:01 event -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.294 * Looking for test storage... 
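The dpdk_mem_utility run above finishes by killing its target process (pid 63018) and waiting for it to exit. A minimal sketch of that kind of killprocess cleanup, reconstructed loosely from the xtrace lines above; the real autotest_common.sh helper does more (for example it inspects the process name and special-cases sudo-owned processes), so treat this only as an illustration of the pattern:

    # Hedged sketch of a killprocess-style cleanup, assuming the target is a
    # child of the current shell (otherwise `wait` would not apply).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # nothing to do without a pid
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true              # reap it so the test exits cleanly
    }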
00:06:12.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:12.294 09:55:01 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:12.294 09:55:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.294 09:55:01 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.294 09:55:01 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:12.294 09:55:01 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.294 09:55:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.294 ************************************ 00:06:12.294 START TEST event_perf 00:06:12.294 ************************************ 00:06:12.294 09:55:01 event.event_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.294 Running I/O for 1 seconds...[2024-06-10 09:55:01.370854] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:12.294 [2024-06-10 09:55:01.371005] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63118 ] 00:06:12.294 [2024-06-10 09:55:01.545159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.294 [2024-06-10 09:55:01.781072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.294 Running I/O for 1 seconds...[2024-06-10 09:55:01.781213] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.294 [2024-06-10 09:55:01.781289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.294 [2024-06-10 09:55:01.781514] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.670 00:06:13.670 lcore 0: 185316 00:06:13.670 lcore 1: 185316 00:06:13.670 lcore 2: 185317 00:06:13.670 lcore 3: 185316 00:06:13.929 done. 00:06:13.929 00:06:13.929 real 0m1.876s 00:06:13.929 user 0m4.635s 00:06:13.929 sys 0m0.110s 00:06:13.929 09:55:03 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.929 09:55:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.929 ************************************ 00:06:13.929 END TEST event_perf 00:06:13.929 ************************************ 00:06:13.929 09:55:03 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:13.929 09:55:03 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:13.929 09:55:03 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.929 09:55:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.929 ************************************ 00:06:13.929 START TEST event_reactor 00:06:13.929 ************************************ 00:06:13.929 09:55:03 event.event_reactor -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:13.929 [2024-06-10 09:55:03.297859] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:13.929 [2024-06-10 09:55:03.298007] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63158 ] 00:06:14.188 [2024-06-10 09:55:03.471949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.447 [2024-06-10 09:55:03.705760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.823 test_start 00:06:15.823 oneshot 00:06:15.823 tick 100 00:06:15.823 tick 100 00:06:15.823 tick 250 00:06:15.823 tick 100 00:06:15.823 tick 100 00:06:15.823 tick 100 00:06:15.823 tick 250 00:06:15.823 tick 500 00:06:15.823 tick 100 00:06:15.823 tick 100 00:06:15.823 tick 250 00:06:15.823 tick 100 00:06:15.823 tick 100 00:06:15.823 test_end 00:06:15.823 ************************************ 00:06:15.823 END TEST event_reactor 00:06:15.823 ************************************ 00:06:15.823 00:06:15.823 real 0m1.837s 00:06:15.823 user 0m1.622s 00:06:15.823 sys 0m0.105s 00:06:15.823 09:55:05 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.823 09:55:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:15.823 09:55:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.823 09:55:05 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:15.823 09:55:05 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.823 09:55:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.823 ************************************ 00:06:15.823 START TEST event_reactor_perf 00:06:15.823 ************************************ 00:06:15.823 09:55:05 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.823 [2024-06-10 09:55:05.188570] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:15.823 [2024-06-10 09:55:05.188746] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63200 ] 00:06:16.096 [2024-06-10 09:55:05.360079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.096 [2024-06-10 09:55:05.589593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.479 test_start 00:06:17.479 test_end 00:06:17.479 Performance: 276249 events per second 00:06:17.479 00:06:17.479 real 0m1.843s 00:06:17.479 user 0m1.639s 00:06:17.479 sys 0m0.093s 00:06:17.479 09:55:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.479 09:55:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.479 ************************************ 00:06:17.479 END TEST event_reactor_perf 00:06:17.479 ************************************ 00:06:17.738 09:55:07 event -- event/event.sh@49 -- # uname -s 00:06:17.738 09:55:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:17.738 09:55:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:17.738 09:55:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.738 09:55:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.738 09:55:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.738 ************************************ 00:06:17.738 START TEST event_scheduler 00:06:17.738 ************************************ 00:06:17.738 09:55:07 event.event_scheduler -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:17.738 * Looking for test storage... 00:06:17.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:17.738 09:55:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:17.738 09:55:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63268 00:06:17.738 09:55:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:17.738 09:55:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.738 09:55:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63268 00:06:17.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.738 09:55:07 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 63268 ']' 00:06:17.738 09:55:07 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.738 09:55:07 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.738 09:55:07 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.738 09:55:07 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.738 09:55:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.738 [2024-06-10 09:55:07.212404] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
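The scheduler app above is started with --wait-for-rpc, so the script blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". A minimal stand-in for that waitforlisten step (the real helper's implementation is not shown in this log, so this is only an assumed, simplified equivalent):

    # Poll until the SPDK RPC unix socket appears, giving up after ~10 seconds.
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        [ -S "$sock" ] && break
        sleep 0.1
    done
    [ -S "$sock" ] || { echo "app never listened on $sock" >&2; exit 1; }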
00:06:17.738 [2024-06-10 09:55:07.212830] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63268 ] 00:06:17.997 [2024-06-10 09:55:07.378514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.256 [2024-06-10 09:55:07.614928] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.256 [2024-06-10 09:55:07.615235] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.256 [2024-06-10 09:55:07.615073] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.256 [2024-06-10 09:55:07.615216] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.823 09:55:08 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:18.823 09:55:08 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:06:18.823 09:55:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:18.823 09:55:08 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:18.823 09:55:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.823 POWER: Env isn't set yet! 00:06:18.823 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:18.823 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.823 POWER: Cannot set governor of lcore 0 to userspace 00:06:18.823 POWER: Attempting to initialise PSTAT power management... 00:06:18.823 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.823 POWER: Cannot set governor of lcore 0 to performance 00:06:18.823 POWER: Attempting to initialise AMD PSTATE power management... 00:06:18.823 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.823 POWER: Cannot set governor of lcore 0 to userspace 00:06:18.823 POWER: Attempting to initialise CPPC power management... 00:06:18.823 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.823 POWER: Cannot set governor of lcore 0 to userspace 00:06:18.823 POWER: Attempting to initialise VM power management... 
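The POWER errors above come from the power library probing the per-CPU cpufreq interface: it tries to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor and cannot, so each governor attempt fails. A manual equivalent for inspection, assuming a typical Linux cpufreq sysfs layout and using cpu0 only as an example:

    # Show the current governor for cpu0 and the governors the kernel offers.
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    # Changing it needs root; this is roughly what the failing writes attempt.
    echo userspace | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor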
00:06:18.823 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:18.823 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:18.823 POWER: Unable to set Power Management Environment for lcore 0 00:06:18.823 [2024-06-10 09:55:08.230263] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:18.823 [2024-06-10 09:55:08.230320] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:18.823 [2024-06-10 09:55:08.230363] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:18.823 [2024-06-10 09:55:08.230418] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:18.823 [2024-06-10 09:55:08.230483] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:18.823 [2024-06-10 09:55:08.230536] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:18.823 09:55:08 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:18.823 09:55:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:18.823 09:55:08 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:18.823 09:55:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 [2024-06-10 09:55:08.481747] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:19.082 09:55:08 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:19.082 09:55:08 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:19.082 09:55:08 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 ************************************ 00:06:19.082 START TEST scheduler_create_thread 00:06:19.082 ************************************ 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 2 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 3 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 4 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 5 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 6 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 7 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 8 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 9 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 10 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:19.082 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:19.083 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.083 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.083 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.083 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.083 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.083 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.458 09:55:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:20.458 09:55:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:20.458 09:55:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:20.458 09:55:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:20.458 09:55:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.393 09:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.393 ************************************ 00:06:21.393 END TEST scheduler_create_thread 00:06:21.393 ************************************ 00:06:21.393 00:06:21.393 real 0m2.139s 00:06:21.393 user 0m0.017s 00:06:21.393 sys 0m0.004s 00:06:21.393 09:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.393 09:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.393 09:55:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:21.393 09:55:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63268 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 63268 ']' 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 63268 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 00:06:21.393 09:55:10 event.event_scheduler 
-- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 63268 00:06:21.393 killing process with pid 63268 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63268' 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 63268 00:06:21.393 09:55:10 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 63268 00:06:21.651 [2024-06-10 09:55:11.111033] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:23.026 00:06:23.026 real 0m5.204s 00:06:23.026 user 0m8.890s 00:06:23.026 sys 0m0.413s 00:06:23.026 09:55:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.026 ************************************ 00:06:23.026 END TEST event_scheduler 00:06:23.026 ************************************ 00:06:23.026 09:55:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.026 09:55:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:23.026 09:55:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:23.026 09:55:12 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:23.026 09:55:12 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.026 09:55:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.026 ************************************ 00:06:23.026 START TEST app_repeat 00:06:23.026 ************************************ 00:06:23.026 09:55:12 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63374 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.026 Process app_repeat pid: 63374 00:06:23.026 spdk_app_start Round 0 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63374' 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:23.026 09:55:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:06:23.026 09:55:12 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 63374 ']' 00:06:23.026 09:55:12 event.app_repeat -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.026 09:55:12 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:23.026 09:55:12 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.026 09:55:12 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:23.026 09:55:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.026 [2024-06-10 09:55:12.353653] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:23.026 [2024-06-10 09:55:12.353820] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63374 ] 00:06:23.026 [2024-06-10 09:55:12.516654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.284 [2024-06-10 09:55:12.701029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.284 [2024-06-10 09:55:12.701043] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.850 09:55:13 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:23.850 09:55:13 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:23.850 09:55:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.108 Malloc0 00:06:24.367 09:55:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.625 Malloc1 00:06:24.625 09:55:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.625 09:55:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.884 /dev/nbd0 
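Once the app_repeat target is listening on /var/tmp/spdk-nbd.sock, the script creates two 64 MiB malloc bdevs (4096-byte blocks) and exports them as /dev/nbd0 and, just below, /dev/nbd1. A condensed sketch of those RPC calls, using only commands that appear in this trace:

    # Create two malloc bdevs and export them over NBD, talking to the
    # app_repeat instance on its dedicated RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create 64 4096      # -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096      # -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    $rpc -s $sock nbd_get_disks                   # JSON listing of both exports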
00:06:24.884 09:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.884 09:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.884 1+0 records in 00:06:24.884 1+0 records out 00:06:24.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251492 s, 16.3 MB/s 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:24.884 09:55:14 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:24.884 09:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.884 09:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.884 09:55:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.143 /dev/nbd1 00:06:25.143 09:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.143 09:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.143 1+0 records in 00:06:25.143 1+0 records out 00:06:25.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532736 s, 7.7 MB/s 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:25.143 
09:55:14 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:25.143 09:55:14 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:25.143 09:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.143 09:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.143 09:55:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.143 09:55:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.143 09:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.401 { 00:06:25.401 "nbd_device": "/dev/nbd0", 00:06:25.401 "bdev_name": "Malloc0" 00:06:25.401 }, 00:06:25.401 { 00:06:25.401 "nbd_device": "/dev/nbd1", 00:06:25.401 "bdev_name": "Malloc1" 00:06:25.401 } 00:06:25.401 ]' 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.401 { 00:06:25.401 "nbd_device": "/dev/nbd0", 00:06:25.401 "bdev_name": "Malloc0" 00:06:25.401 }, 00:06:25.401 { 00:06:25.401 "nbd_device": "/dev/nbd1", 00:06:25.401 "bdev_name": "Malloc1" 00:06:25.401 } 00:06:25.401 ]' 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.401 /dev/nbd1' 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.401 /dev/nbd1' 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.401 09:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.402 256+0 records in 00:06:25.402 256+0 records out 00:06:25.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00844515 s, 124 MB/s 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.402 256+0 records in 00:06:25.402 256+0 records out 00:06:25.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256167 s, 40.9 MB/s 00:06:25.402 09:55:14 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.402 256+0 records in 00:06:25.402 256+0 records out 00:06:25.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308472 s, 34.0 MB/s 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.402 09:55:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.660 09:55:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.918 09:55:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.918 09:55:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:25.918 09:55:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.918 09:55:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.918 09:55:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.918 09:55:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.176 09:55:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.177 09:55:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.177 09:55:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.177 09:55:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.177 09:55:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.435 09:55:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.435 09:55:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.693 09:55:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.078 [2024-06-10 09:55:17.339301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.078 [2024-06-10 09:55:17.516414] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.078 [2024-06-10 09:55:17.516420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.337 [2024-06-10 09:55:17.682428] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.337 [2024-06-10 09:55:17.682514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.719 spdk_app_start Round 1 00:06:29.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
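Each app_repeat round repeats the write/verify cycle seen above: random data is written through each exported NBD device with O_DIRECT and then compared byte-for-byte against the source file before the devices are stopped and the app is killed with spdk_kill_instance SIGTERM. A condensed sketch of that verify cycle, using the same commands as the trace:

    # Write 1 MiB of random data through each NBD export, read it back, compare.
    test=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$test bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$test of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M $test $nbd
    done
    rm $test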
00:06:29.719 09:55:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.719 09:55:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:29.719 09:55:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:06:29.719 09:55:19 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 63374 ']' 00:06:29.719 09:55:19 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.719 09:55:19 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:29.719 09:55:19 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.719 09:55:19 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:29.719 09:55:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.980 09:55:19 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:29.980 09:55:19 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:29.980 09:55:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.566 Malloc0 00:06:30.566 09:55:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.825 Malloc1 00:06:30.825 09:55:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.825 09:55:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.084 /dev/nbd0 00:06:31.084 09:55:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.084 09:55:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:31.084 
09:55:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.084 1+0 records in 00:06:31.084 1+0 records out 00:06:31.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296968 s, 13.8 MB/s 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:31.084 09:55:20 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:31.084 09:55:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.084 09:55:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.084 09:55:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.343 /dev/nbd1 00:06:31.343 09:55:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.343 09:55:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.343 1+0 records in 00:06:31.343 1+0 records out 00:06:31.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028487 s, 14.4 MB/s 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:31.343 09:55:20 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.344 09:55:20 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:31.344 09:55:20 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:31.344 09:55:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:06:31.344 09:55:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.344 09:55:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.344 09:55:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.344 09:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.603 { 00:06:31.603 "nbd_device": "/dev/nbd0", 00:06:31.603 "bdev_name": "Malloc0" 00:06:31.603 }, 00:06:31.603 { 00:06:31.603 "nbd_device": "/dev/nbd1", 00:06:31.603 "bdev_name": "Malloc1" 00:06:31.603 } 00:06:31.603 ]' 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.603 { 00:06:31.603 "nbd_device": "/dev/nbd0", 00:06:31.603 "bdev_name": "Malloc0" 00:06:31.603 }, 00:06:31.603 { 00:06:31.603 "nbd_device": "/dev/nbd1", 00:06:31.603 "bdev_name": "Malloc1" 00:06:31.603 } 00:06:31.603 ]' 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.603 /dev/nbd1' 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.603 /dev/nbd1' 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.603 09:55:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.604 09:55:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.604 09:55:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.604 256+0 records in 00:06:31.604 256+0 records out 00:06:31.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00881014 s, 119 MB/s 00:06:31.604 09:55:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.604 09:55:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.604 256+0 records in 00:06:31.604 256+0 records out 00:06:31.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259904 s, 40.3 MB/s 00:06:31.604 09:55:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.604 09:55:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.863 256+0 records in 00:06:31.863 256+0 records out 00:06:31.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358588 s, 29.2 MB/s 00:06:31.863 09:55:21 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.863 09:55:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.123 09:55:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.382 
09:55:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.382 09:55:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.641 09:55:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.641 09:55:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.206 09:55:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.143 [2024-06-10 09:55:23.606847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.402 [2024-06-10 09:55:23.769250] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.402 [2024-06-10 09:55:23.769250] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.661 [2024-06-10 09:55:23.930729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.661 [2024-06-10 09:55:23.930827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.036 spdk_app_start Round 2 00:06:36.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.036 09:55:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.036 09:55:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.036 09:55:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:06:36.036 09:55:25 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 63374 ']' 00:06:36.036 09:55:25 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.036 09:55:25 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:36.036 09:55:25 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
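The round traced above is the complete nbd data path: the app exports each malloc bdev as a kernel nbd device over its RPC socket, waits for the device node to become usable, pushes a 1 MiB random pattern through it, verifies the pattern with cmp, and then tears the devices down again. A condensed, illustrative sketch of that sequence (rpc.py stands in for the full scripts/rpc.py path used in the trace; the temp-file paths and the sleep in the readiness loop are placeholders, not the test script itself):

  # create two 64 MiB malloc bdevs with a 4096-byte block size; each call prints the bdev name
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc1

  # export the bdevs as kernel block devices
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

  # wait until the kernel lists the device, then confirm a direct 4 KiB read succeeds
  for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

  # write a 1 MiB random pattern through each device, then read it back for comparison
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"
  done

  # detach the devices and shut the app down for the next round
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM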
00:06:36.036 09:55:25 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:36.036 09:55:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.295 09:55:25 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:36.295 09:55:25 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:36.295 09:55:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.923 Malloc0 00:06:36.923 09:55:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.923 Malloc1 00:06:36.923 09:55:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.923 09:55:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.182 /dev/nbd0 00:06:37.182 09:55:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.182 09:55:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.182 1+0 records in 00:06:37.182 1+0 records out 
00:06:37.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266681 s, 15.4 MB/s 00:06:37.182 09:55:26 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.440 09:55:26 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:37.440 09:55:26 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.440 09:55:26 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:37.440 09:55:26 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:37.440 09:55:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.440 09:55:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.440 09:55:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.699 /dev/nbd1 00:06:37.699 09:55:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.699 09:55:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:37.699 09:55:26 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.700 1+0 records in 00:06:37.700 1+0 records out 00:06:37.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341501 s, 12.0 MB/s 00:06:37.700 09:55:26 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.700 09:55:26 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:37.700 09:55:26 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.700 09:55:26 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:37.700 09:55:26 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:37.700 09:55:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.700 09:55:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.700 09:55:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.700 09:55:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.700 09:55:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.958 { 00:06:37.958 "nbd_device": "/dev/nbd0", 00:06:37.958 "bdev_name": "Malloc0" 00:06:37.958 }, 00:06:37.958 { 00:06:37.958 "nbd_device": "/dev/nbd1", 00:06:37.958 "bdev_name": "Malloc1" 00:06:37.958 } 
00:06:37.958 ]' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.958 { 00:06:37.958 "nbd_device": "/dev/nbd0", 00:06:37.958 "bdev_name": "Malloc0" 00:06:37.958 }, 00:06:37.958 { 00:06:37.958 "nbd_device": "/dev/nbd1", 00:06:37.958 "bdev_name": "Malloc1" 00:06:37.958 } 00:06:37.958 ]' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.958 /dev/nbd1' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.958 /dev/nbd1' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.958 256+0 records in 00:06:37.958 256+0 records out 00:06:37.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00897409 s, 117 MB/s 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.958 256+0 records in 00:06:37.958 256+0 records out 00:06:37.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301729 s, 34.8 MB/s 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.958 256+0 records in 00:06:37.958 256+0 records out 00:06:37.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0393432 s, 26.7 MB/s 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.958 09:55:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.958 09:55:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.217 09:55:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.217 09:55:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.217 09:55:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.217 09:55:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.217 09:55:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.217 09:55:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.217 09:55:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.475 09:55:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.734 09:55:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.992 09:55:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.992 09:55:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.559 09:55:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.936 [2024-06-10 09:55:30.085210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.936 [2024-06-10 09:55:30.278519] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.936 [2024-06-10 09:55:30.278523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.936 [2024-06-10 09:55:30.451467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.936 [2024-06-10 09:55:30.451533] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.837 09:55:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63374 /var/tmp/spdk-nbd.sock 00:06:42.838 09:55:31 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 63374 ']' 00:06:42.838 09:55:31 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.838 09:55:31 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:42.838 09:55:31 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
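Each round ends with the same count assertion: two nbd devices reported right after nbd_start_disk, zero after nbd_stop_disk. The check seen in the trace, reduced to its essentials (socket path as in this run; the trailing || true keeps grep's non-zero exit on an empty device list from aborting the script):

  disks=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device')
  count=$(echo "$disks" | grep -c /dev/nbd || true)
  [ "$count" -eq 2 ]    # expected to be 0 once the disks have been stopped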
00:06:42.838 09:55:31 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:42.838 09:55:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:42.838 09:55:32 event.app_repeat -- event/event.sh@39 -- # killprocess 63374 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 63374 ']' 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 63374 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 63374 00:06:42.838 killing process with pid 63374 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63374' 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@968 -- # kill 63374 00:06:42.838 09:55:32 event.app_repeat -- common/autotest_common.sh@973 -- # wait 63374 00:06:43.770 spdk_app_start is called in Round 0. 00:06:43.770 Shutdown signal received, stop current app iteration 00:06:43.770 Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 reinitialization... 00:06:43.770 spdk_app_start is called in Round 1. 00:06:43.770 Shutdown signal received, stop current app iteration 00:06:43.770 Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 reinitialization... 00:06:43.770 spdk_app_start is called in Round 2. 00:06:43.770 Shutdown signal received, stop current app iteration 00:06:43.770 Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 reinitialization... 00:06:43.770 spdk_app_start is called in Round 3. 00:06:43.771 Shutdown signal received, stop current app iteration 00:06:44.028 09:55:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:44.028 09:55:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:44.028 00:06:44.028 real 0m20.989s 00:06:44.028 user 0m45.470s 00:06:44.028 sys 0m2.697s 00:06:44.028 09:55:33 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.028 ************************************ 00:06:44.028 END TEST app_repeat 00:06:44.028 ************************************ 00:06:44.028 09:55:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.028 09:55:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:44.028 09:55:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:44.028 09:55:33 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:44.028 09:55:33 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.028 09:55:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.028 ************************************ 00:06:44.028 START TEST cpu_locks 00:06:44.028 ************************************ 00:06:44.028 09:55:33 event.cpu_locks -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:44.028 * Looking for test storage... 
00:06:44.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:44.028 09:55:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:44.028 09:55:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:44.028 09:55:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:44.028 09:55:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:44.028 09:55:33 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:44.028 09:55:33 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.028 09:55:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.028 ************************************ 00:06:44.028 START TEST default_locks 00:06:44.028 ************************************ 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63834 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63834 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 63834 ']' 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:44.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:44.028 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.287 [2024-06-10 09:55:33.560521] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:44.287 [2024-06-10 09:55:33.560745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63834 ] 00:06:44.287 [2024-06-10 09:55:33.735568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.546 [2024-06-10 09:55:34.032648] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.481 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:45.481 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:45.481 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63834 00:06:45.481 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63834 00:06:45.482 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63834 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 63834 ']' 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 63834 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 63834 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63834' 00:06:45.740 killing process with pid 63834 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 63834 00:06:45.740 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 63834 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63834 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 63834 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 63834 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 63834 ']' 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:48.272 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.272 ERROR: process (pid: 63834) is no longer running 00:06:48.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (63834) - No such process 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.272 00:06:48.272 real 0m3.989s 00:06:48.272 user 0m4.069s 00:06:48.272 sys 0m0.590s 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.272 09:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.272 ************************************ 00:06:48.272 END TEST default_locks 00:06:48.272 ************************************ 00:06:48.272 09:55:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.272 09:55:37 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.272 09:55:37 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.272 09:55:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.272 ************************************ 00:06:48.272 START TEST default_locks_via_rpc 00:06:48.272 ************************************ 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63905 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63905 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 63905 ']' 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:48.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
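default_locks, which finishes above, exercises both halves of the core-lock check: while the single-core target (pid 63834 in this run) is alive, lslocks reports an spdk_cpu_lock entry for it, and once it has been killed the follow-up waitforlisten is expected to fail with the "No such process" / es=1 lines recorded in the trace. Reduced to a sketch, with the pid shown only as a placeholder:

  pid=63834                                    # in the test this comes from the spdk_tgt it just launched
  lslocks -p "$pid" | grep -q spdk_cpu_lock    # succeeds while the target holds its CPU core lock
  kill "$pid"; wait "$pid"
  # calling waitforlisten on the dead pid must now fail; the harness asserts es=1 for that case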
00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:48.272 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.272 [2024-06-10 09:55:37.603763] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:48.272 [2024-06-10 09:55:37.603988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63905 ] 00:06:48.272 [2024-06-10 09:55:37.777605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.530 [2024-06-10 09:55:37.956923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63905 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63905 00:06:49.465 09:55:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63905 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 63905 ']' 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 63905 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# ps --no-headers -o comm= 63905 00:06:49.722 killing process with pid 63905 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63905' 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 63905 00:06:49.722 09:55:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 63905 00:06:51.635 ************************************ 00:06:51.636 END TEST default_locks_via_rpc 00:06:51.636 ************************************ 00:06:51.636 00:06:51.636 real 0m3.632s 00:06:51.636 user 0m3.694s 00:06:51.636 sys 0m0.620s 00:06:51.636 09:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.636 09:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.894 09:55:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:51.894 09:55:41 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:51.894 09:55:41 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.894 09:55:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.894 ************************************ 00:06:51.894 START TEST non_locking_app_on_locked_coremask 00:06:51.894 ************************************ 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:51.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63979 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63979 /var/tmp/spdk.sock 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 63979 ']' 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:51.894 09:55:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.894 [2024-06-10 09:55:41.268667] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
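killprocess, traced here and after every other sub-test, guards the kill with a process-name check so the harness never signals a recycled pid that belongs to something else. Its observable behaviour, reconstructed from this trace only (the body of the sudo branch is not visible in the log and is left elided):

  killprocess() {
    local pid=$1
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for a plain spdk_tgt
    if [ "$process_name" = sudo ]; then
      :   # target was started through sudo; handled separately (not shown in this trace)
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }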
00:06:51.894 [2024-06-10 09:55:41.268859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63979 ] 00:06:52.152 [2024-06-10 09:55:41.425767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.152 [2024-06-10 09:55:41.601239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63995 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63995 /var/tmp/spdk2.sock 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 63995 ']' 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:53.087 09:55:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.087 [2024-06-10 09:55:42.366081] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:53.087 [2024-06-10 09:55:42.366451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63995 ] 00:06:53.087 [2024-06-10 09:55:42.535450] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.087 [2024-06-10 09:55:42.535522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.653 [2024-06-10 09:55:42.889129] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.027 09:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:55.027 09:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:55.027 09:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63979 00:06:55.027 09:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63979 00:06:55.027 09:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63979 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 63979 ']' 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 63979 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 63979 00:06:55.599 killing process with pid 63979 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63979' 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 63979 00:06:55.599 09:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 63979 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63995 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 63995 ']' 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 63995 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 63995 00:07:00.950 killing process with pid 63995 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63995' 00:07:00.950 09:55:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 63995 00:07:00.950 09:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 63995 00:07:02.854 ************************************ 00:07:02.854 END TEST non_locking_app_on_locked_coremask 00:07:02.854 00:07:02.854 real 0m10.871s 00:07:02.854 user 0m11.293s 00:07:02.854 sys 0m1.238s 00:07:02.854 09:55:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.854 09:55:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.854 ************************************ 00:07:02.854 09:55:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:02.854 09:55:52 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.854 09:55:52 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.854 09:55:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.854 ************************************ 00:07:02.854 START TEST locking_app_on_unlocked_coremask 00:07:02.854 ************************************ 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64136 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64136 /var/tmp/spdk.sock 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 64136 ']' 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:02.854 09:55:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.854 [2024-06-10 09:55:52.212306] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:02.854 [2024-06-10 09:55:52.212490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64136 ] 00:07:03.114 [2024-06-10 09:55:52.383315] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
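The pair of tests around this point exercise the cpumask lock from both sides. non_locking_app_on_locked_coremask, which ends above, starts a normal spdk_tgt on core 0 and then shows that a second instance can share that core only when it opts out of the lock and serves RPC on its own socket; locking_app_on_unlocked_coremask, starting next, simply reverses which instance carries --disable-cpumask-locks. The two launch lines from this run, condensed (binary path shortened, backgrounding and waitforlisten plumbing omitted):

  spdk_tgt -m 0x1 &                                                  # first instance, takes the core 0 lock
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance, same core, no lock

  # the second instance's "CPU core locks deactivated." startup notice confirms it skipped
  # the lock that would otherwise have prevented it from starting on an already-locked core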
00:07:03.114 [2024-06-10 09:55:52.383403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.114 [2024-06-10 09:55:52.621874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.049 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:04.049 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:04.049 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64152 00:07:04.049 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64152 /var/tmp/spdk2.sock 00:07:04.049 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.049 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 64152 ']' 00:07:04.050 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.050 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:04.050 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.050 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:04.050 09:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.050 [2024-06-10 09:55:53.488174] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:07:04.050 [2024-06-10 09:55:53.488557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64152 ] 00:07:04.308 [2024-06-10 09:55:53.669033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.568 [2024-06-10 09:55:54.044509] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.102 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:07.102 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:07.102 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64152 00:07:07.102 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64152 00:07:07.102 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64136 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 64136 ']' 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 64136 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 64136 00:07:07.669 killing process with pid 64136 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64136' 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 64136 00:07:07.669 09:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 64136 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64152 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 64152 ']' 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 64152 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 64152 00:07:11.896 killing process with pid 64152 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64152' 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 64152 00:07:11.896 09:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 64152 00:07:13.798 ************************************ 00:07:13.798 END TEST locking_app_on_unlocked_coremask 00:07:13.798 ************************************ 00:07:13.798 00:07:13.798 real 0m10.888s 00:07:13.798 user 0m11.612s 00:07:13.798 sys 0m1.196s 00:07:13.798 09:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.798 09:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.798 09:56:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:13.798 09:56:03 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:13.798 09:56:03 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.798 09:56:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.798 ************************************ 00:07:13.798 START TEST locking_app_on_locked_coremask 00:07:13.798 ************************************ 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64290 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64290 /var/tmp/spdk.sock 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 64290 ']' 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:13.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:13.798 09:56:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.798 [2024-06-10 09:56:03.154164] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
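locking_app_on_locked_coremask inverts the previous case: this time the first target is started without --disable-cpumask-locks, so it does claim core 0, and a second target on the same mask is expected to die. Roughly, under the same path assumptions as before:

    build/bin/spdk_tgt -m 0x1 &                        # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # same core: the claim fails and it exits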
00:07:13.798 [2024-06-10 09:56:03.154340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64290 ] 00:07:14.057 [2024-06-10 09:56:03.322569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.057 [2024-06-10 09:56:03.493676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64311 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64311 /var/tmp/spdk2.sock 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 64311 /var/tmp/spdk2.sock 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 64311 /var/tmp/spdk2.sock 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 64311 ']' 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:14.993 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.993 [2024-06-10 09:56:04.290020] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
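The NOT/valid_exec_arg lines above are the harness running that doomed launch and inverting its exit status. A loose model of the helper (the real one in autotest_common.sh also validates the argument type and special-cases exit codes above 128, as the es > 128 check hints):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # invert: succeed only when the wrapped command failed
    }
    NOT false && echo "expected failure observed"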
00:07:14.993 [2024-06-10 09:56:04.290366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64311 ] 00:07:14.993 [2024-06-10 09:56:04.467079] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64290 has claimed it. 00:07:14.993 [2024-06-10 09:56:04.467183] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.562 ERROR: process (pid: 64311) is no longer running 00:07:15.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (64311) - No such process 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64290 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64290 00:07:15.562 09:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64290 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 64290 ']' 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 64290 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 64290 00:07:16.129 killing process with pid 64290 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64290' 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 64290 00:07:16.129 09:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 64290 00:07:18.034 ************************************ 00:07:18.034 END TEST locking_app_on_locked_coremask 00:07:18.034 ************************************ 00:07:18.034 00:07:18.034 real 0m4.390s 00:07:18.034 user 0m4.784s 00:07:18.034 sys 0m0.713s 00:07:18.034 09:56:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.034 09:56:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.034 09:56:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:18.034 09:56:07 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:18.034 09:56:07 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.034 09:56:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.034 ************************************ 00:07:18.034 START TEST locking_overlapped_coremask 00:07:18.034 ************************************ 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64379 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64379 /var/tmp/spdk.sock 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 64379 ']' 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:18.034 09:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.293 [2024-06-10 09:56:07.588935] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
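Here the first target is launched with -m 0x7, i.e. three reactors, and the 'Reactor started on core 0/1/2' notices in the lines that follow match the set bits of that mask. Decoding a cpumask by hand:

    mask=0x7
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done    # prints cores 0, 1 and 2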
00:07:18.293 [2024-06-10 09:56:07.589114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64379 ] 00:07:18.293 [2024-06-10 09:56:07.759584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.551 [2024-06-10 09:56:07.959058] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.551 [2024-06-10 09:56:07.959156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.551 [2024-06-10 09:56:07.959174] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64398 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64398 /var/tmp/spdk2.sock 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 64398 /var/tmp/spdk2.sock 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:19.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 64398 /var/tmp/spdk2.sock 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 64398 ']' 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:19.486 09:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.486 [2024-06-10 09:56:08.776311] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
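The second target asks for -m 0x1c (cores 2-4) while the first holds 0x7 (cores 0-2); the two masks intersect on exactly one bit, which is the core the claim error below names:

    printf '%#x\n' $(( 0x7 & 0x1c ))    # 0x4 -> only bit 2 set, i.e. the contested core 2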
00:07:19.486 [2024-06-10 09:56:08.776495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64398 ] 00:07:19.486 [2024-06-10 09:56:08.957986] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64379 has claimed it. 00:07:19.486 [2024-06-10 09:56:08.958062] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:20.091 ERROR: process (pid: 64398) is no longer running 00:07:20.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 845: kill: (64398) - No such process 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64379 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 64379 ']' 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 64379 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 64379 00:07:20.091 killing process with pid 64379 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64379' 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 64379 00:07:20.091 09:56:09 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@973 -- # wait 64379 00:07:22.623 ************************************ 00:07:22.623 END TEST locking_overlapped_coremask 00:07:22.623 ************************************ 00:07:22.623 00:07:22.623 real 0m4.075s 00:07:22.623 user 0m10.699s 00:07:22.623 sys 0m0.513s 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.623 09:56:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:22.623 09:56:11 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:22.623 09:56:11 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.623 09:56:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.623 ************************************ 00:07:22.623 START TEST locking_overlapped_coremask_via_rpc 00:07:22.623 ************************************ 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64457 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64457 /var/tmp/spdk.sock 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 64457 ']' 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:22.623 09:56:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.623 [2024-06-10 09:56:11.718108] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:22.623 [2024-06-10 09:56:11.718277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64457 ] 00:07:22.623 [2024-06-10 09:56:11.889493] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.623 [2024-06-10 09:56:11.889549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.623 [2024-06-10 09:56:12.081373] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.623 [2024-06-10 09:56:12.081482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.623 [2024-06-10 09:56:12.081494] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64479 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64479 /var/tmp/spdk2.sock 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 64479 ']' 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:23.557 09:56:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.557 [2024-06-10 09:56:12.889103] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:23.557 [2024-06-10 09:56:12.889267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64479 ] 00:07:23.557 [2024-06-10 09:56:13.067986] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
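For the via_rpc variant both targets pass --disable-cpumask-locks, so the same overlapping masks (0x7 and 0x1c) boot cleanly and no lock files exist yet; claiming is deferred to the RPC exercised below. A sketch under the same assumptions as the earlier ones:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core locks taken yet"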
00:07:23.557 [2024-06-10 09:56:13.068049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.124 [2024-06-10 09:56:13.499295] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.124 [2024-06-10 09:56:13.502727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.124 [2024-06-10 09:56:13.502740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.653 [2024-06-10 09:56:15.603843] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64457 has claimed it. 
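framework_enable_cpumask_locks makes an already-running target claim its cores after the fact. Issued with the stock client, the call against the first target succeeds and materializes one lock file per core of 0x7; the same call against the second target produces the claim error logged above:

    scripts/rpc.py framework_enable_cpumask_locks   # first target, default /var/tmp/spdk.sock
    ls /var/tmp/spdk_cpu_lock_*                     # expect _000, _001 and _002 for mask 0x7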
00:07:26.653 request: 00:07:26.653 { 00:07:26.653 "method": "framework_enable_cpumask_locks", 00:07:26.653 "req_id": 1 00:07:26.653 } 00:07:26.653 Got JSON-RPC error response 00:07:26.653 response: 00:07:26.653 { 00:07:26.653 "code": -32603, 00:07:26.653 "message": "Failed to claim CPU core: 2" 00:07:26.653 } 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64457 /var/tmp/spdk.sock 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 64457 ']' 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:26.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64479 /var/tmp/spdk2.sock 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 64479 ']' 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
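In the response above, -32603 is the generic JSON-RPC 2.0 "internal error" code; the SPDK-specific reason rides in the message field. The harness's NOT rpc_cmd boils this down to an exit-status check, roughly:

    if ! scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "core claim refused, as the test expects"
    fi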
00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:26.653 09:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.911 ************************************ 00:07:26.911 END TEST locking_overlapped_coremask_via_rpc 00:07:26.911 ************************************ 00:07:26.911 00:07:26.911 real 0m4.624s 00:07:26.911 user 0m1.686s 00:07:26.911 sys 0m0.229s 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.911 09:56:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.911 09:56:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:26.911 09:56:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64457 ]] 00:07:26.911 09:56:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64457 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 64457 ']' 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 64457 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 64457 00:07:26.911 killing process with pid 64457 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64457' 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 64457 00:07:26.911 09:56:16 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 64457 00:07:29.441 09:56:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64479 ]] 00:07:29.441 09:56:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64479 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 64479 ']' 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 64479 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:29.441 
09:56:18 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 64479 00:07:29.441 killing process with pid 64479 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64479' 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 64479 00:07:29.441 09:56:18 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 64479 00:07:31.344 09:56:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:31.344 Process with pid 64457 is not found 00:07:31.344 Process with pid 64479 is not found 00:07:31.344 09:56:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:31.344 09:56:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64457 ]] 00:07:31.344 09:56:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64457 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 64457 ']' 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 64457 00:07:31.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (64457) - No such process 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 64457 is not found' 00:07:31.344 09:56:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64479 ]] 00:07:31.344 09:56:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64479 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 64479 ']' 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 64479 00:07:31.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (64479) - No such process 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 64479 is not found' 00:07:31.344 09:56:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:31.344 00:07:31.344 real 0m47.202s 00:07:31.344 user 1m21.618s 00:07:31.344 sys 0m5.992s 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.344 09:56:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.344 ************************************ 00:07:31.344 END TEST cpu_locks 00:07:31.344 ************************************ 00:07:31.344 00:07:31.344 real 1m19.353s 00:07:31.344 user 2m24.006s 00:07:31.344 sys 0m9.652s 00:07:31.344 09:56:20 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.344 09:56:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.344 ************************************ 00:07:31.344 END TEST event 00:07:31.344 ************************************ 00:07:31.344 09:56:20 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:31.344 09:56:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:31.344 09:56:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:31.344 09:56:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.344 ************************************ 00:07:31.344 START TEST thread 00:07:31.344 ************************************ 00:07:31.344 09:56:20 thread -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:31.344 * Looking for test storage... 
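The thread suite that starts here is two runs of the same poller_perf microbenchmark; on its command line, -b is the number of pollers to register, -l the poller period in microseconds and -t the run time in seconds, matching the 'Running 1000 pollers...' banners below. From the repo root:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # period 0: run as often as possible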
00:07:31.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:31.344 09:56:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:31.344 09:56:20 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:31.344 09:56:20 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:31.344 09:56:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.344 ************************************ 00:07:31.344 START TEST thread_poller_perf 00:07:31.344 ************************************ 00:07:31.344 09:56:20 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:31.344 [2024-06-10 09:56:20.788509] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:31.344 [2024-06-10 09:56:20.788956] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64662 ] 00:07:31.602 [2024-06-10 09:56:20.964325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.861 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:31.861 [2024-06-10 09:56:21.197637] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.242 ====================================== 00:07:33.242 busy:2213198972 (cyc) 00:07:33.242 total_run_count: 263000 00:07:33.242 tsc_hz: 2200000000 (cyc) 00:07:33.242 ====================================== 00:07:33.242 poller_cost: 8415 (cyc), 3825 (nsec) 00:07:33.242 00:07:33.242 real 0m1.869s 00:07:33.242 user 0m1.643s 00:07:33.242 sys 0m0.113s 00:07:33.242 09:56:22 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:33.242 ************************************ 00:07:33.242 END TEST thread_poller_perf 00:07:33.242 ************************************ 00:07:33.242 09:56:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.242 09:56:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:33.242 09:56:22 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:33.242 09:56:22 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:33.242 09:56:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.242 ************************************ 00:07:33.242 START TEST thread_poller_perf 00:07:33.242 ************************************ 00:07:33.242 09:56:22 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:33.242 [2024-06-10 09:56:22.731326] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:33.243 [2024-06-10 09:56:22.731551] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64704 ] 00:07:33.501 [2024-06-10 09:56:22.905425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.760 Running 1000 pollers for 1 seconds with 0 microseconds period. 
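poller_cost in the summary above is just the other counters combined: busy TSC cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Checking the first run's numbers (the 0-period run below works out the same way, 2203935470 / 3493000 ≈ 630 cycles):

    busy=2213198972; runs=263000; tsc_hz=2200000000
    echo $(( busy / runs ))                         # 8415 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))   # 3825 nsec at the reported 2.2 GHz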
00:07:33.760 [2024-06-10 09:56:23.094783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.137 ====================================== 00:07:35.137 busy:2203935470 (cyc) 00:07:35.137 total_run_count: 3493000 00:07:35.137 tsc_hz: 2200000000 (cyc) 00:07:35.137 ====================================== 00:07:35.137 poller_cost: 630 (cyc), 286 (nsec) 00:07:35.137 ************************************ 00:07:35.137 END TEST thread_poller_perf 00:07:35.137 ************************************ 00:07:35.137 00:07:35.137 real 0m1.823s 00:07:35.137 user 0m1.608s 00:07:35.137 sys 0m0.105s 00:07:35.137 09:56:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.137 09:56:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 09:56:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:35.137 ************************************ 00:07:35.137 END TEST thread 00:07:35.137 ************************************ 00:07:35.137 00:07:35.137 real 0m3.886s 00:07:35.137 user 0m3.324s 00:07:35.137 sys 0m0.331s 00:07:35.137 09:56:24 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.137 09:56:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 09:56:24 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:35.137 09:56:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:35.137 09:56:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.137 09:56:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.137 ************************************ 00:07:35.137 START TEST accel 00:07:35.137 ************************************ 00:07:35.138 09:56:24 accel -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:35.397 * Looking for test storage... 00:07:35.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:35.397 09:56:24 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:35.397 09:56:24 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:35.397 09:56:24 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.397 09:56:24 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=64785 00:07:35.397 09:56:24 accel -- accel/accel.sh@63 -- # waitforlisten 64785 00:07:35.397 09:56:24 accel -- common/autotest_common.sh@830 -- # '[' -z 64785 ']' 00:07:35.397 09:56:24 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.397 09:56:24 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:35.397 09:56:24 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:35.397 09:56:24 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
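The accel suite opens by asking the freshly started target how each opcode is assigned to a module; the jq pipeline in the lines that follow flattens that JSON object into opc=module pairs. The underlying call, issued directly (assuming the stock client):

    scripts/rpc.py accel_get_opc_assignments
    # with no accelerator configured, every opcode should report "software",
    # which is exactly what the expected_opcs table below ends up holding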
00:07:35.397 09:56:24 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:35.397 09:56:24 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:35.397 09:56:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.397 09:56:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.397 09:56:24 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.397 09:56:24 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.397 09:56:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.397 09:56:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.397 09:56:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:35.397 09:56:24 accel -- accel/accel.sh@41 -- # jq -r . 00:07:35.397 [2024-06-10 09:56:24.783437] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:35.397 [2024-06-10 09:56:24.783614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64785 ] 00:07:35.655 [2024-06-10 09:56:24.954916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.655 [2024-06-10 09:56:25.145738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.592 09:56:25 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:36.592 09:56:25 accel -- common/autotest_common.sh@863 -- # return 0 00:07:36.592 09:56:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:36.592 09:56:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:36.592 09:56:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:36.592 09:56:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:36.592 09:56:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:36.592 09:56:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:36.592 09:56:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:36.592 09:56:25 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.592 09:56:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.592 09:56:25 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.592 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.592 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.592 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.592 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.592 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.592 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.592 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 
09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.593 09:56:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.593 09:56:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.593 09:56:26 accel -- accel/accel.sh@75 -- # killprocess 64785 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@949 -- # '[' -z 64785 ']' 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@953 -- # kill -0 64785 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@954 -- # uname 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 64785 00:07:36.593 killing process with pid 64785 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64785' 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@968 -- # kill 64785 00:07:36.593 09:56:26 accel -- common/autotest_common.sh@973 -- # wait 64785 00:07:39.126 09:56:28 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:39.126 09:56:28 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:39.126 09:56:28 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:39.126 09:56:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:39.126 09:56:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.126 09:56:28 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:39.126 09:56:28 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
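The run of IFS== / read -r lines above is a single loop unrolled by xtrace, one iteration per opcode. Collapsed back into script form, reusing the exact jq filter shown above (with scripts/rpc.py standing in for the harness's $rpc_py):

    declare -A expected_opcs
    while IFS== read -r opc module; do
        expected_opcs["$opc"]=$module
    done < <(scripts/rpc.py accel_get_opc_assignments \
               | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')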
00:07:39.126 09:56:28 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:39.126 09:56:28 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:39.126 09:56:28 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:39.126 09:56:28 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:39.126 09:56:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:39.126 09:56:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.126 ************************************ 00:07:39.126 START TEST accel_missing_filename 00:07:39.126 ************************************ 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:39.126 09:56:28 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:39.126 09:56:28 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:39.126 [2024-06-10 09:56:28.451797] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:39.126 [2024-06-10 09:56:28.452011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64855 ] 00:07:39.126 [2024-06-10 09:56:28.618107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.385 [2024-06-10 09:56:28.807387] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.644 [2024-06-10 09:56:29.016333] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.213 [2024-06-10 09:56:29.510655] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:40.471 A filename is required. 
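The "A filename is required." failure above is the expected outcome of the accel_missing_filename test: a compress workload needs an uncompressed input file (-l), and the NOT wrapper treats the non-zero exit as a pass. Reconstructed from the commands traced in this log (the -c /dev/fd/62 argument is the harness's process-substituted JSON config):

    # exits non-zero with "A filename is required." because -l <input file> is missing
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
    # the accel_compress_verify test that follows adds the input file but keeps -y,
    # which compress also rejects ("Compression does not support the verify option")
    build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
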
00:07:40.471 ************************************ 00:07:40.471 END TEST accel_missing_filename 00:07:40.471 ************************************ 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:40.471 00:07:40.471 real 0m1.518s 00:07:40.471 user 0m1.315s 00:07:40.471 sys 0m0.155s 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:40.471 09:56:29 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:40.471 09:56:29 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.471 09:56:29 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:40.471 09:56:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:40.471 09:56:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 ************************************ 00:07:40.472 START TEST accel_compress_verify 00:07:40.472 ************************************ 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:40.472 09:56:29 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.472 09:56:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:40.472 09:56:29 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:40.730 [2024-06-10 09:56:30.013727] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:40.730 [2024-06-10 09:56:30.013904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64897 ] 00:07:40.730 [2024-06-10 09:56:30.190595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.989 [2024-06-10 09:56:30.415926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.248 [2024-06-10 09:56:30.617875] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.815 [2024-06-10 09:56:31.074681] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:42.074 00:07:42.074 Compression does not support the verify option, aborting. 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:07:42.074 ************************************ 00:07:42.074 END TEST accel_compress_verify 00:07:42.074 ************************************ 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:42.074 00:07:42.074 real 0m1.521s 00:07:42.074 user 0m1.302s 00:07:42.074 sys 0m0.158s 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.074 09:56:31 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:42.074 09:56:31 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:42.074 09:56:31 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:42.074 09:56:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.074 09:56:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.074 ************************************ 00:07:42.074 START TEST accel_wrong_workload 00:07:42.074 ************************************ 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:42.074 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:42.074 09:56:31 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:42.074 09:56:31 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:42.074 Unsupported workload type: foobar 00:07:42.074 [2024-06-10 09:56:31.580448] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:42.334 accel_perf options: 00:07:42.334 [-h help message] 00:07:42.334 [-q queue depth per core] 00:07:42.334 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:42.334 [-T number of threads per core 00:07:42.334 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:42.334 [-t time in seconds] 00:07:42.334 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:42.334 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:42.334 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:42.334 [-l for compress/decompress workloads, name of uncompressed input file 00:07:42.334 [-S for crc32c workload, use this seed value (default 0) 00:07:42.334 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:42.334 [-f for fill workload, use this BYTE value (default 255) 00:07:42.334 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:42.334 [-y verify result if this switch is on] 00:07:42.334 [-a tasks to allocate per core (default: same value as -q)] 00:07:42.334 Can be used to spread operations across a wider range of memory. 
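The option list printed above is the accel_perf usage text that every negative test in this section triggers. For reference, the flags it describes map directly onto the invocations used by the passing tests later in this run; for example (values taken from the crc32c and fill tests below):

    # -t run time in seconds, -w workload, -S crc32c seed, -y verify the result
    build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # -f fill byte, -q queue depth per core, -a tasks allocated per core
    build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
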
00:07:42.334 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:07:42.334 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:42.334 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:42.334 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:42.334 00:07:42.334 real 0m0.070s 00:07:42.334 user 0m0.091s 00:07:42.334 sys 0m0.037s 00:07:42.334 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.334 09:56:31 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:42.334 ************************************ 00:07:42.334 END TEST accel_wrong_workload 00:07:42.334 ************************************ 00:07:42.334 09:56:31 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:42.334 09:56:31 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:42.334 09:56:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.334 09:56:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.334 ************************************ 00:07:42.334 START TEST accel_negative_buffers 00:07:42.334 ************************************ 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:42.334 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:42.334 09:56:31 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:42.334 -x option must be non-negative. 
00:07:42.334 [2024-06-10 09:56:31.704044] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:42.334 accel_perf options: 00:07:42.335 [-h help message] 00:07:42.335 [-q queue depth per core] 00:07:42.335 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:42.335 [-T number of threads per core 00:07:42.335 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:42.335 [-t time in seconds] 00:07:42.335 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:42.335 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:42.335 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:42.335 [-l for compress/decompress workloads, name of uncompressed input file 00:07:42.335 [-S for crc32c workload, use this seed value (default 0) 00:07:42.335 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:42.335 [-f for fill workload, use this BYTE value (default 255) 00:07:42.335 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:42.335 [-y verify result if this switch is on] 00:07:42.335 [-a tasks to allocate per core (default: same value as -q)] 00:07:42.335 Can be used to spread operations across a wider range of memory. 00:07:42.335 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:07:42.335 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:42.335 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:42.335 ************************************ 00:07:42.335 END TEST accel_negative_buffers 00:07:42.335 ************************************ 00:07:42.335 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:42.335 00:07:42.335 real 0m0.082s 00:07:42.335 user 0m0.098s 00:07:42.335 sys 0m0.042s 00:07:42.335 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.335 09:56:31 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:42.335 09:56:31 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:42.335 09:56:31 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:42.335 09:56:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.335 09:56:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.335 ************************************ 00:07:42.335 START TEST accel_crc32c 00:07:42.335 ************************************ 00:07:42.335 09:56:31 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:42.335 09:56:31 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:42.335 09:56:31 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:42.335 [2024-06-10 09:56:31.843191] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:42.335 [2024-06-10 09:56:31.843361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64970 ] 00:07:42.593 [2024-06-10 09:56:32.016211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.853 [2024-06-10 09:56:32.211978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.115 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.116 09:56:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:45.018 09:56:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.018 00:07:45.018 real 0m2.538s 00:07:45.018 user 0m2.286s 00:07:45.018 sys 0m0.150s 00:07:45.018 09:56:34 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:45.018 ************************************ 00:07:45.018 END TEST accel_crc32c 00:07:45.018 ************************************ 00:07:45.018 09:56:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:45.018 09:56:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:45.018 09:56:34 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:45.018 09:56:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:45.018 09:56:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.018 ************************************ 00:07:45.018 START TEST accel_crc32c_C2 00:07:45.018 ************************************ 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.018 09:56:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:45.018 [2024-06-10 09:56:34.460509] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:45.018 [2024-06-10 09:56:34.461011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65016 ] 00:07:45.276 [2024-06-10 09:56:34.636287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.535 [2024-06-10 09:56:34.835147] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.535 09:56:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.435 00:07:47.435 real 0m2.556s 00:07:47.435 user 0m2.291s 00:07:47.435 sys 0m0.161s 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.435 ************************************ 00:07:47.435 END TEST accel_crc32c_C2 00:07:47.435 ************************************ 00:07:47.435 09:56:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:47.694 09:56:36 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:47.694 09:56:36 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:47.694 09:56:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.694 09:56:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.694 ************************************ 00:07:47.694 START TEST accel_copy 00:07:47.694 ************************************ 00:07:47.694 09:56:36 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:47.694 09:56:36 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:47.694 [2024-06-10 09:56:37.049960] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:47.694 [2024-06-10 09:56:37.050172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65063 ] 00:07:47.953 [2024-06-10 09:56:37.229474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.953 [2024-06-10 09:56:37.422262] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.211 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:48.212 09:56:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.112 09:56:39 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:50.112 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:50.113 09:56:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.113 00:07:50.113 real 0m2.523s 00:07:50.113 user 0m2.270s 00:07:50.113 sys 0m0.150s 00:07:50.113 09:56:39 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:50.113 ************************************ 00:07:50.113 END TEST accel_copy 00:07:50.113 ************************************ 00:07:50.113 09:56:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:50.113 09:56:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:50.113 09:56:39 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:50.113 09:56:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:50.113 09:56:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.113 ************************************ 00:07:50.113 START TEST accel_fill 00:07:50.113 ************************************ 00:07:50.113 09:56:39 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:50.113 09:56:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
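As with the earlier tests, accel_fill starts by building its JSON configuration and handing it to accel_perf as "-c /dev/fd/62", i.e. over a process substitution rather than a file on disk. A rough sketch of that pattern, assuming an illustrative JSON shape (the real build_accel_config in accel.sh assembles accel_json_cfg from module options, all of which are empty in this run):

    build_accel_config() {
        local accel_json_cfg=()   # per-module config fragments; none are added in this run
        local IFS=,
        # join the fragments and normalize the document with jq, as traced at accel.sh@41
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    # the <(...) process substitution is what appears as "-c /dev/fd/62" in the log
    build/examples/accel_perf -c <(build_accel_config) -t 1 -w fill -f 128 -q 64 -a 64 -y
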
00:07:50.113 [2024-06-10 09:56:39.626138] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:50.113 [2024-06-10 09:56:39.626305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65109 ] 00:07:50.371 [2024-06-10 09:56:39.794182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.629 [2024-06-10 09:56:39.982737] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:50.888 09:56:40 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:50.888 09:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:52.788 09:56:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.788 00:07:52.788 real 0m2.499s 00:07:52.788 user 0m2.251s 00:07:52.788 sys 0m0.150s 00:07:52.788 09:56:42 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:52.788 09:56:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:52.788 ************************************ 00:07:52.788 END TEST accel_fill 00:07:52.788 ************************************ 00:07:52.788 09:56:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:52.788 09:56:42 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:52.788 09:56:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.788 09:56:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.788 ************************************ 00:07:52.788 START TEST accel_copy_crc32c 00:07:52.788 ************************************ 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:52.789 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:52.789 [2024-06-10 09:56:42.165828] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
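The records above show the shape of every accel test in this run: run_test times a wrapped accel_test call, which launches the accel_perf example with the workload flags captured in the xtrace (-t 1 -w copy_crc32c -y here) and hands it a JSON accel config over /dev/fd/62, assembled from the accel_json_cfg array seen in the build_accel_config trace. A minimal sketch of reproducing one such run by hand, assuming the CI's /home/vagrant/spdk_repo checkout and an empty subsystem config (both assumptions; the harness builds the real config itself):

    # hypothetical manual invocation: flags copied verbatim from the log,
    # <(...) mirrors the /dev/fd/62 process-substitution pattern
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c <(echo '{"subsystems":[]}') -t 1 -w copy_crc32c -y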
00:07:52.789 [2024-06-10 09:56:42.166031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65156 ] 00:07:53.048 [2024-06-10 09:56:42.342763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.307 [2024-06-10 09:56:42.567439] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.307 09:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.215 00:07:55.215 real 0m2.508s 00:07:55.215 user 0m2.260s 00:07:55.215 sys 0m0.153s 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:55.215 ************************************ 00:07:55.215 END TEST accel_copy_crc32c 00:07:55.215 ************************************ 00:07:55.215 09:56:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:55.215 09:56:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:55.215 09:56:44 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:55.215 09:56:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:55.215 09:56:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.215 ************************************ 00:07:55.215 START TEST accel_copy_crc32c_C2 00:07:55.215 ************************************ 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:55.215 09:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:55.215 [2024-06-10 09:56:44.722783] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:55.215 [2024-06-10 09:56:44.722933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65197 ] 00:07:55.474 [2024-06-10 09:56:44.891313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.735 [2024-06-10 09:56:45.085035] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:55.994 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.995 09:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.895 00:07:57.895 real 0m2.467s 00:07:57.895 user 0m2.208s 00:07:57.895 sys 0m0.163s 00:07:57.895 ************************************ 00:07:57.895 END TEST accel_copy_crc32c_C2 00:07:57.895 ************************************ 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:57.895 09:56:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:57.895 09:56:47 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:07:57.895 09:56:47 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:57.895 09:56:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:57.896 09:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.896 ************************************ 00:07:57.896 START TEST accel_dualcast 00:07:57.896 ************************************ 00:07:57.896 09:56:47 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:57.896 09:56:47 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:57.896 [2024-06-10 09:56:47.243900] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
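Steps @105 through @110 of accel.sh all follow this same pattern; only the -w workload and its extra arguments change (copy_crc32c, copy_crc32c -C 2, dualcast, compare, xor, xor -x 3). A sketch of the same software-path sweep outside the harness (a hypothetical loop, not the harness itself; -c is omitted here on the assumption that the pure software path needs no module config):

    for w in fill copy_crc32c dualcast compare xor; do
        # one 1-second verified run per workload, as in the log above
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w" -y
    done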
00:07:57.896 [2024-06-10 09:56:47.244091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65249 ] 00:07:58.154 [2024-06-10 09:56:47.429007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.154 [2024-06-10 09:56:47.617941] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.412 09:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 
09:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:00.313 ************************************ 00:08:00.313 END TEST accel_dualcast 00:08:00.313 ************************************ 00:08:00.313 09:56:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.313 00:08:00.313 real 0m2.479s 00:08:00.313 user 0m2.232s 00:08:00.313 sys 0m0.149s 00:08:00.313 09:56:49 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:00.313 09:56:49 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:00.313 09:56:49 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:00.313 09:56:49 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:00.313 09:56:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:00.313 09:56:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.313 ************************************ 00:08:00.313 START TEST accel_compare 00:08:00.313 ************************************ 00:08:00.313 09:56:49 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:00.313 09:56:49 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:00.314 [2024-06-10 09:56:49.768363] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
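The real/user/sys triplet printed at each END TEST comes from timing the whole run_test call, so it bundles the requested 1-second workload (-t 1) with roughly 1.5 s of app startup and teardown; the runs above land consistently around 2.5 s wall-clock (2.499 s fill, 2.508 s copy_crc32c, 2.467 s copy_crc32c_C2, 2.479 s dualcast). Timing a single run the same way is just:

    # time a lone compare run: expect ~1 s of workload plus startup/teardown
    time /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y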
00:08:00.314 [2024-06-10 09:56:49.768523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65290 ] 00:08:00.571 [2024-06-10 09:56:49.942417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.828 [2024-06-10 09:56:50.171626] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.086 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.087 09:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.988 09:56:52 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:02.988 09:56:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.988 00:08:02.988 real 0m2.500s 00:08:02.988 user 0m2.252s 00:08:02.988 sys 0m0.150s 00:08:02.988 09:56:52 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.988 09:56:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:02.988 ************************************ 00:08:02.988 END TEST accel_compare 00:08:02.988 ************************************ 00:08:02.988 09:56:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:02.988 09:56:52 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:02.988 09:56:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:02.988 09:56:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.988 ************************************ 00:08:02.988 START TEST accel_xor 00:08:02.988 ************************************ 00:08:02.988 09:56:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:02.988 09:56:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:02.988 [2024-06-10 09:56:52.310803] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
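For the plain xor run starting here, the xtrace below records val=2 right after val=xor: with no -x argument, two source buffers are exercised (a default read from this log's trace rather than from accel_perf's own help text). The equivalent direct run:

    # two xor sources: the default this log's xtrace shows as val=2
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y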
00:08:02.988 [2024-06-10 09:56:52.310947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65342 ] 00:08:02.988 [2024-06-10 09:56:52.483186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.248 [2024-06-10 09:56:52.713930] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:03.507 09:56:52 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.507 09:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.461 00:08:05.461 real 0m2.515s 00:08:05.461 user 0m2.265s 00:08:05.461 sys 0m0.152s 00:08:05.461 09:56:54 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:05.461 09:56:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:05.461 ************************************ 00:08:05.461 END TEST accel_xor 00:08:05.461 ************************************ 00:08:05.461 09:56:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:05.461 09:56:54 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:05.461 09:56:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:05.461 09:56:54 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.461 ************************************ 00:08:05.461 START TEST accel_xor 00:08:05.461 ************************************ 00:08:05.461 09:56:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:05.461 09:56:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:05.461 [2024-06-10 09:56:54.883608] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
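The second accel_xor test (accel.sh@110) repeats the workload with -x 3, and the xtrace below duly records val=3 where the previous run had val=2, i.e. three source buffers per xor operation, matching the flag:

    # same workload with the xor source count raised to 3, per the log's -x 3
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3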
00:08:05.461 [2024-06-10 09:56:54.883791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65383 ] 00:08:05.749 [2024-06-10 09:56:55.052656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.749 [2024-06-10 09:56:55.244129] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:06.032 09:56:55 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.032 09:56:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.929 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:07.930 09:56:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.930 00:08:07.930 real 0m2.468s 00:08:07.930 user 0m2.218s 00:08:07.930 sys 0m0.155s 00:08:07.930 09:56:57 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.930 ************************************ 00:08:07.930 END TEST accel_xor 00:08:07.930 ************************************ 00:08:07.930 09:56:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 09:56:57 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:07.930 09:56:57 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:07.930 09:56:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.930 09:56:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 ************************************ 00:08:07.930 START TEST accel_dif_verify 00:08:07.930 ************************************ 00:08:07.930 09:56:57 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:07.930 09:56:57 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:07.930 [2024-06-10 09:56:57.398800] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:08:07.930 [2024-06-10 09:56:57.398952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65430 ] 00:08:08.188 [2024-06-10 09:56:57.566549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.446 [2024-06-10 09:56:57.754030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.446 09:56:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:10.343 09:56:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:10.344 09:56:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.344 00:08:10.344 real 0m2.462s 00:08:10.344 user 0m2.226s 00:08:10.344 sys 0m0.141s 00:08:10.344 09:56:59 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.344 ************************************ 00:08:10.344 END TEST accel_dif_verify 00:08:10.344 ************************************ 00:08:10.344 09:56:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:10.344 09:56:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:10.344 09:56:59 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:10.344 09:56:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.344 09:56:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.601 ************************************ 00:08:10.601 START TEST accel_dif_generate 00:08:10.601 ************************************ 00:08:10.601 09:56:59 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:10.601 09:56:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:10.601 [2024-06-10 09:56:59.907277] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:10.601 [2024-06-10 09:56:59.907416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65476 ] 00:08:10.601 [2024-06-10 09:57:00.069049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.859 [2024-06-10 09:57:00.257017] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.116 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 
09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:11.117 09:57:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:13.014 09:57:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.014 00:08:13.014 real 0m2.436s 00:08:13.014 user 0m2.198s 00:08:13.014 sys 0m0.139s 00:08:13.014 ************************************ 00:08:13.014 END TEST accel_dif_generate 00:08:13.014 
************************************ 00:08:13.014 09:57:02 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:13.014 09:57:02 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:13.014 09:57:02 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:13.014 09:57:02 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:13.014 09:57:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:13.014 09:57:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.014 ************************************ 00:08:13.014 START TEST accel_dif_generate_copy 00:08:13.014 ************************************ 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:13.014 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:13.014 [2024-06-10 09:57:02.394768] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:08:13.014 [2024-06-10 09:57:02.394926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65523 ] 00:08:13.272 [2024-06-10 09:57:02.565977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.272 [2024-06-10 09:57:02.753838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.531 09:57:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.429 00:08:15.429 real 0m2.466s 00:08:15.429 user 0m2.223s 00:08:15.429 sys 0m0.144s 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:15.429 09:57:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 ************************************ 00:08:15.429 END TEST accel_dif_generate_copy 00:08:15.429 ************************************ 00:08:15.429 09:57:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:15.429 09:57:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.429 09:57:04 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:15.429 09:57:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:15.429 09:57:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.429 ************************************ 00:08:15.429 START TEST accel_comp 00:08:15.429 ************************************ 00:08:15.429 09:57:04 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.429 09:57:04 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.430 09:57:04 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.430 09:57:04 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.430 09:57:04 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.430 09:57:04 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:15.430 09:57:04 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:15.430 [2024-06-10 09:57:04.904274] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:15.430 [2024-06-10 09:57:04.904448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65564 ] 00:08:15.689 [2024-06-10 09:57:05.075275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.947 [2024-06-10 09:57:05.306317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:16.205 09:57:05 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:16.205 09:57:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.106 09:57:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:18.107 09:57:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.107 00:08:18.107 real 0m2.510s 00:08:18.107 user 0m2.284s 00:08:18.107 sys 0m0.132s 00:08:18.107 09:57:07 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:18.107 09:57:07 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:18.107 ************************************ 00:08:18.107 END TEST accel_comp 00:08:18.107 ************************************ 00:08:18.107 09:57:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:18.107 09:57:07 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:18.107 09:57:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.107 09:57:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.107 ************************************ 00:08:18.107 START TEST accel_decomp 00:08:18.107 ************************************ 00:08:18.107 09:57:07 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:18.107 
09:57:07 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:18.107 09:57:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:18.107 [2024-06-10 09:57:07.459034] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:18.107 [2024-06-10 09:57:07.459165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65616 ] 00:08:18.107 [2024-06-10 09:57:07.621231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.365 [2024-06-10 09:57:07.810891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:18.622 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.623 09:57:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:20.521 09:57:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.521 00:08:20.521 real 0m2.443s 00:08:20.521 user 0m2.214s 00:08:20.521 sys 0m0.135s 00:08:20.521 09:57:09 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:20.521 09:57:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:20.521 ************************************ 00:08:20.521 END TEST accel_decomp 00:08:20.521 ************************************ 00:08:20.521 09:57:09 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:20.521 09:57:09 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:20.521 09:57:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:20.521 09:57:09 accel -- common/autotest_common.sh@10 -- # set +x 
00:08:20.521 ************************************ 00:08:20.521 START TEST accel_decomp_full 00:08:20.521 ************************************ 00:08:20.521 09:57:09 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:20.521 09:57:09 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:20.521 [2024-06-10 09:57:09.952271] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
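The accel_decomp_full run starting above differs from the plain accel_decomp run only in the trailing -o 0: judging from the parsed config later in the trace, this switches the data size from '4096 bytes' to '111250 bytes', i.e. the whole test/accel/bib file is decompressed in one shot instead of 4 KiB chunks. A minimal sketch of the underlying invocation, with flags copied verbatim from the @12 record above; the harness additionally passes its generated JSON config via -c /dev/fd/62, omitted here on the assumption that the default software module is used when -c is absent:

    # -t 1: run for '1 seconds'; -w decompress: workload under test;
    # -y: presumably the verify switch (the parsed config shows val=Yes);
    # -o 0: whole-file mode, val='111250 bytes' instead of '4096 bytes'.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0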
00:08:20.522 [2024-06-10 09:57:09.952405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65657 ] 00:08:20.780 [2024-06-10 09:57:10.116823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.037 [2024-06-10 09:57:10.306539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:21.037 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:21.038 09:57:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:22.935 09:57:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.935 00:08:22.935 real 0m2.457s 00:08:22.935 user 0m2.216s 00:08:22.935 sys 0m0.144s 00:08:22.935 09:57:12 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:22.935 ************************************ 00:08:22.935 END TEST accel_decomp_full 00:08:22.935 ************************************ 00:08:22.935 09:57:12 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:22.935 09:57:12 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:22.935 09:57:12 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:22.935 09:57:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:22.935 09:57:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.935 ************************************ 00:08:22.935 START TEST accel_decomp_mcore 00:08:22.935 ************************************ 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
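accel_decomp_mcore adds -m 0xf to the same decompress workload. -m is the standard SPDK application core-mask option, and the EAL records below confirm it: the app starts with -c 0xf, reports 'Total cores available: 4', and launches a reactor on each of cores 0-3. A sketch of the multi-core invocation, under the same hedges as the previous one (flags taken from the trace, generated JSON config omitted):

    # Same workload as above plus the core mask; four reactors will start.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf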
00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:22.935 09:57:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:23.192 [2024-06-10 09:57:12.468804] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:23.192 [2024-06-10 09:57:12.468951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65704 ] 00:08:23.192 [2024-06-10 09:57:12.643764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.450 [2024-06-10 09:57:12.878750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.450 [2024-06-10 09:57:12.878907] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.450 [2024-06-10 09:57:12.879030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.450 [2024-06-10 09:57:12.879045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.708 09:57:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.607 ************************************ 00:08:25.607 END TEST accel_decomp_mcore 00:08:25.607 ************************************ 00:08:25.607 00:08:25.607 real 0m2.536s 00:08:25.607 user 0m0.013s 00:08:25.607 sys 0m0.006s 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:25.607 09:57:14 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:25.607 09:57:14 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.607 09:57:14 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:25.607 09:57:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:25.607 09:57:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.607 ************************************ 00:08:25.607 START TEST accel_decomp_full_mcore 00:08:25.607 ************************************ 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:25.607 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
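accel_decomp_full_mcore, starting above, simply combines the two previous variations: the full 111250-byte buffer (-o 0) decompressed across four cores (-m 0xf). The build_accel_config trace is identical to the single-core case; only the workload flags change. Sketch of the combined invocation under the same assumptions as the earlier ones:

    # Combined variant: whole-file buffer (-o 0) across cores 0-3 (-m 0xf).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf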
00:08:25.607 [2024-06-10 09:57:15.050977] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:25.607 [2024-06-10 09:57:15.051134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65753 ] 00:08:25.865 [2024-06-10 09:57:15.227043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.125 [2024-06-10 09:57:15.462206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.125 [2024-06-10 09:57:15.462337] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.125 [2024-06-10 09:57:15.462759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.125 [2024-06-10 09:57:15.462760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.384 09:57:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.283 09:57:17 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.283 00:08:28.283 real 0m2.573s 00:08:28.283 user 0m7.381s 00:08:28.283 sys 0m0.170s 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:28.283 ************************************ 00:08:28.283 END TEST accel_decomp_full_mcore 00:08:28.283 ************************************ 00:08:28.283 09:57:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:28.283 09:57:17 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.283 09:57:17 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:28.283 09:57:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:28.283 09:57:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.283 ************************************ 00:08:28.283 START TEST accel_decomp_mthread 00:08:28.283 ************************************ 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:28.283 09:57:17 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:28.283 [2024-06-10 09:57:17.668406] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
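Two things are worth pulling out of the records above. First, the full-mcore result shows user time (0m7.381s) well above wall time (0m2.573s), consistent with the four reactors actually decompressing in parallel rather than taking turns. Second, the next test, accel_decomp_mthread, goes back to a single core (-c 0x1 in the EAL record below, one reactor) and instead passes -T 2; the parsed config records val=2 where the earlier single-core runs had val=1, so -T evidently sets the per-core worker count. That reading is an inference from the trace, not from accel_perf's documented help. Sketch:

    # Back to one core, but with two workers on it (-T 2; val=2 in the config).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2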
00:08:28.283 [2024-06-10 09:57:17.668538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65803 ] 00:08:28.541 [2024-06-10 09:57:17.828211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.541 [2024-06-10 09:57:18.016014] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.799 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.800 09:57:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.729 00:08:30.729 real 0m2.457s 00:08:30.729 user 0m2.223s 00:08:30.729 sys 0m0.137s 00:08:30.729 ************************************ 00:08:30.729 END TEST accel_decomp_mthread 00:08:30.729 ************************************ 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:30.729 09:57:20 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:30.729 09:57:20 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.729 09:57:20 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:30.729 09:57:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:30.729 09:57:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.729 ************************************ 00:08:30.729 START TEST accel_decomp_full_mthread 00:08:30.729 ************************************ 00:08:30.729 09:57:20 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:30.729 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:30.729 [2024-06-10 09:57:20.195256] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
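For reference, the full_mthread variant above reduces to a single accel_perf invocation. The sketch below restates it standalone (flags copied verbatim from the log; the empty accel config fed on fd 62 is an assumption — accel.sh normally builds that JSON via build_accel_config):

    # -w decompress : workload, echoed above as val=decompress
    # -t 1          : duration, matches the '1 seconds' value above
    # -T 2          : worker thread count, matches val=2 above
    # -l .../bib    : compressed input file used by the suite
    # -y -o 0       : remaining flags copied verbatim from the invocation above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -T 2 62<<< '{}'    # '{}' is a hypothetical empty config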
00:08:30.729 [2024-06-10 09:57:20.195718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65849 ] 00:08:30.987 [2024-06-10 09:57:20.369320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.245 [2024-06-10 09:57:20.557191] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.245 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.504 09:57:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.403 00:08:33.403 real 0m2.490s 00:08:33.403 user 0m2.243s 00:08:33.403 sys 0m0.151s 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:33.403 09:57:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:33.403 ************************************ 00:08:33.403 END TEST accel_decomp_full_mthread 00:08:33.403 ************************************ 00:08:33.403 09:57:22 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:08:33.403 09:57:22 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:33.403 09:57:22 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:33.403 09:57:22 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.403 09:57:22 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.403 09:57:22 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:33.403 09:57:22 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.403 09:57:22 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.403 09:57:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:33.403 09:57:22 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.403 09:57:22 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:33.403 09:57:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.403 09:57:22 accel -- accel/accel.sh@41 -- # jq -r . 00:08:33.403 ************************************ 00:08:33.403 START TEST accel_dif_functional_tests 00:08:33.403 ************************************ 00:08:33.403 09:57:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:33.403 [2024-06-10 09:57:22.766535] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:33.403 [2024-06-10 09:57:22.766758] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65897 ] 00:08:33.661 [2024-06-10 09:57:22.938518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:33.661 [2024-06-10 09:57:23.119542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.661 [2024-06-10 09:57:23.119716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.661 [2024-06-10 09:57:23.119855] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.920 00:08:33.920 00:08:33.920 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.920 http://cunit.sourceforge.net/ 00:08:33.920 00:08:33.920 00:08:33.920 Suite: accel_dif 00:08:33.920 Test: verify: DIF generated, GUARD check ...passed 00:08:33.920 Test: verify: DIF generated, APPTAG check ...passed 00:08:33.920 Test: verify: DIF generated, REFTAG check ...passed 00:08:33.920 Test: verify: DIF not generated, GUARD check ...passed 00:08:33.920 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 09:57:23.395325] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:33.920 [2024-06-10 09:57:23.395479] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:33.920 passed 00:08:33.920 Test: verify: DIF not generated, REFTAG check ...passed 00:08:33.920 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:33.920 Test: verify: APPTAG incorrect, APPTAG check ...passed[2024-06-10 09:57:23.395652] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:33.920 [2024-06-10 09:57:23.395799] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:33.920 00:08:33.920 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:33.920 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:33.920 Test: verify: REFTAG_INIT 
correct, REFTAG check ...passed 00:08:33.920 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 09:57:23.396179] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:33.920 passed 00:08:33.920 Test: verify copy: DIF generated, GUARD check ...passed 00:08:33.920 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:33.920 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:33.920 Test: verify copy: DIF not generated, GUARD check ...passed 00:08:33.920 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 09:57:23.396698] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:33.920 passed 00:08:33.920 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 09:57:23.396827] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:33.920 [2024-06-10 09:57:23.396897] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:33.920 passed 00:08:33.920 Test: generate copy: DIF generated, GUARD check ...passed 00:08:33.920 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:33.920 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:33.920 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:33.920 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:33.920 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:33.920 Test: generate copy: iovecs-len validate ...passed 00:08:33.920 Test: generate copy: buffer alignment validate ...passed 00:08:33.920 00:08:33.920 [2024-06-10 09:57:23.397573] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:33.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.920 suites 1 1 n/a 0 0 00:08:33.920 tests 26 26 26 0 0 00:08:33.920 asserts 115 115 115 0 n/a 00:08:33.920 00:08:33.920 Elapsed time = 0.007 seconds 00:08:35.295 00:08:35.295 real 0m1.846s 00:08:35.295 user 0m3.472s 00:08:35.295 sys 0m0.224s 00:08:35.295 09:57:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:35.295 09:57:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:35.295 ************************************ 00:08:35.295 END TEST accel_dif_functional_tests 00:08:35.295 ************************************ 00:08:35.295 00:08:35.295 real 0m59.967s 00:08:35.295 user 1m5.495s 00:08:35.295 sys 0m4.871s 00:08:35.295 09:57:24 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:35.295 ************************************ 00:08:35.295 END TEST accel 00:08:35.295 09:57:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.295 ************************************ 00:08:35.295 09:57:24 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:35.295 09:57:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:35.295 09:57:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:35.295 09:57:24 -- common/autotest_common.sh@10 -- # set +x 00:08:35.295 ************************************ 00:08:35.295 START TEST accel_rpc 00:08:35.295 ************************************ 00:08:35.295 09:57:24 accel_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:35.295 * Looking for test storage... 
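A note on the accel_dif_functional_tests output above: the dif.c *ERROR* lines (Failed to compare Guard / App Tag / Ref Tag) are the expected outcome of the negative cases — each "not generated" test feeds deliberately mismatched protection tags and passes only if verification reports the mismatch, which is why every such line sits beside a "passed" marker. The suite can be re-run standalone as sketched below (binary path verbatim from the log; the empty config on fd 62 is an assumption):

    # run the DIF CUnit suite directly; accel.sh feeds the accel JSON
    # config on fd 62 (an empty '{}' config is assumed here)
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62<<< '{}'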
00:08:35.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:35.295 09:57:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:35.295 09:57:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65979 00:08:35.295 09:57:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:35.295 09:57:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 65979 00:08:35.295 09:57:24 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 65979 ']' 00:08:35.295 09:57:24 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.295 09:57:24 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:35.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.295 09:57:24 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.295 09:57:24 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:35.295 09:57:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.553 [2024-06-10 09:57:24.827294] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:35.553 [2024-06-10 09:57:24.827487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65979 ] 00:08:35.554 [2024-06-10 09:57:25.007121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.812 [2024-06-10 09:57:25.248539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.379 09:57:25 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:36.379 09:57:25 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:36.379 09:57:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:36.379 09:57:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:36.379 09:57:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:36.379 09:57:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:36.379 09:57:25 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:36.379 09:57:25 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:36.379 09:57:25 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:36.379 09:57:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.379 ************************************ 00:08:36.379 START TEST accel_assign_opcode 00:08:36.379 ************************************ 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.379 [2024-06-10 09:57:25.713490] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # 
rpc_cmd accel_assign_opc -o copy -m software 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.379 [2024-06-10 09:57:25.721495] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.379 09:57:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.946 software 00:08:36.946 00:08:36.946 real 0m0.746s 00:08:36.946 user 0m0.052s 00:08:36.946 sys 0m0.009s 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:36.946 09:57:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.946 ************************************ 00:08:36.946 END TEST accel_assign_opcode 00:08:36.946 ************************************ 00:08:37.204 09:57:26 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 65979 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 65979 ']' 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 65979 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 65979 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:37.204 killing process with pid 65979 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 65979' 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@968 -- # kill 65979 00:08:37.204 09:57:26 accel_rpc -- common/autotest_common.sh@973 -- # wait 65979 00:08:39.729 00:08:39.729 real 0m4.013s 00:08:39.729 user 0m4.017s 00:08:39.729 sys 0m0.497s 00:08:39.729 09:57:28 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:39.729 ************************************ 00:08:39.729 END TEST accel_rpc 00:08:39.729 ************************************ 00:08:39.729 09:57:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.729 09:57:28 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:39.729 09:57:28 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:39.729 09:57:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.729 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:08:39.729 ************************************ 00:08:39.729 START TEST app_cmdline 00:08:39.729 ************************************ 00:08:39.730 09:57:28 app_cmdline -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:39.730 * Looking for test storage... 00:08:39.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:39.730 09:57:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:39.730 09:57:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66095 00:08:39.730 09:57:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66095 00:08:39.730 09:57:28 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:39.730 09:57:28 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 66095 ']' 00:08:39.730 09:57:28 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.730 09:57:28 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:39.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.730 09:57:28 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.730 09:57:28 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:39.730 09:57:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.730 [2024-06-10 09:57:28.871949] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
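Condensed, the accel_rpc suite that finished above exercises this JSON-RPC flow (shown here via rpc.py for readability; the test itself drives rpc_cmd over an RPC pipe, and the target was started with --wait-for-rpc so the framework is not yet initialized):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect   # pre-init: accepted and logged as-is
    $rpc accel_assign_opc -o copy -m software    # reassign the copy opcode to software
    $rpc framework_start_init                    # module resolution happens here
    $rpc accel_get_opc_assignments | jq -r .copy # expected output: software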
00:08:39.730 [2024-06-10 09:57:28.872118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66095 ] 00:08:39.730 [2024-06-10 09:57:29.042741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.988 [2024-06-10 09:57:29.250428] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.554 09:57:29 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:40.554 09:57:29 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:08:40.554 09:57:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:40.813 { 00:08:40.813 "version": "SPDK v24.09-pre git sha1 0a5aebcde", 00:08:40.813 "fields": { 00:08:40.813 "major": 24, 00:08:40.813 "minor": 9, 00:08:40.813 "patch": 0, 00:08:40.813 "suffix": "-pre", 00:08:40.813 "commit": "0a5aebcde" 00:08:40.813 } 00:08:40.813 } 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:40.813 09:57:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:40.813 09:57:30 app_cmdline -- common/autotest_common.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.071 request: 00:08:41.071 { 00:08:41.071 "method": "env_dpdk_get_mem_stats", 00:08:41.071 "req_id": 1 00:08:41.071 } 00:08:41.071 Got JSON-RPC error response 00:08:41.071 response: 00:08:41.071 { 00:08:41.071 "code": -32601, 00:08:41.071 "message": "Method not found" 00:08:41.071 } 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:41.071 09:57:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66095 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 66095 ']' 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 66095 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66095 00:08:41.071 killing process with pid 66095 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66095' 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@968 -- # kill 66095 00:08:41.071 09:57:30 app_cmdline -- common/autotest_common.sh@973 -- # wait 66095 00:08:43.621 00:08:43.621 real 0m3.997s 00:08:43.621 user 0m4.549s 00:08:43.621 sys 0m0.508s 00:08:43.621 09:57:32 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:43.621 ************************************ 00:08:43.621 END TEST app_cmdline 00:08:43.621 ************************************ 00:08:43.621 09:57:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:43.621 09:57:32 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:43.621 09:57:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:43.621 09:57:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:43.621 09:57:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.621 ************************************ 00:08:43.621 START TEST version 00:08:43.621 ************************************ 00:08:43.621 09:57:32 version -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:43.621 * Looking for test storage... 
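Before the version suite below, a recap of the allow-list behavior app_cmdline just verified: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else fails with JSON-RPC error -32601:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version                      # allowed: returns the version JSON above
    $rpc rpc_get_methods | jq -r '.[]' | sort  # allowed: lists the two permitted methods
    $rpc env_dpdk_get_mem_stats                # rejected: 'Method not found' (-32601)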
00:08:43.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:43.621 09:57:32 version -- app/version.sh@17 -- # get_header_version major 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # cut -f2 00:08:43.621 09:57:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.621 09:57:32 version -- app/version.sh@17 -- # major=24 00:08:43.621 09:57:32 version -- app/version.sh@18 -- # get_header_version minor 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # cut -f2 00:08:43.621 09:57:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.621 09:57:32 version -- app/version.sh@18 -- # minor=9 00:08:43.621 09:57:32 version -- app/version.sh@19 -- # get_header_version patch 00:08:43.621 09:57:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # cut -f2 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.621 09:57:32 version -- app/version.sh@19 -- # patch=0 00:08:43.621 09:57:32 version -- app/version.sh@20 -- # get_header_version suffix 00:08:43.621 09:57:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # cut -f2 00:08:43.621 09:57:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.621 09:57:32 version -- app/version.sh@20 -- # suffix=-pre 00:08:43.621 09:57:32 version -- app/version.sh@22 -- # version=24.9 00:08:43.621 09:57:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:43.621 09:57:32 version -- app/version.sh@28 -- # version=24.9rc0 00:08:43.621 09:57:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:43.621 09:57:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:43.621 09:57:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:43.621 09:57:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:43.621 00:08:43.621 real 0m0.148s 00:08:43.621 user 0m0.080s 00:08:43.621 sys 0m0.097s 00:08:43.621 ************************************ 00:08:43.621 END TEST version 00:08:43.621 ************************************ 00:08:43.621 09:57:32 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:43.621 09:57:32 version -- common/autotest_common.sh@10 -- # set +x 00:08:43.621 09:57:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:43.621 09:57:32 -- spdk/autotest.sh@198 -- # uname -s 00:08:43.621 09:57:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:43.621 09:57:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.621 09:57:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.621 09:57:32 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:43.621 09:57:32 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:43.621 09:57:32 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 
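The version suite above asserts that the Python package version agrees with the C header. get_header_version reduces to the grep/cut/tr pipeline below (commands verbatim from the log; cut -f2 relies on the header's tab-separated #define lines, and the Python import assumes PYTHONPATH includes the repo's python directory, as set above):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "$major.$minor"                                  # 24.9 for this build
    python3 -c 'import spdk; print(spdk.__version__)'     # 24.9rc0; suffix -pre maps to rc0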
00:08:43.621 09:57:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:43.621 09:57:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.621 ************************************ 00:08:43.621 START TEST blockdev_nvme 00:08:43.621 ************************************ 00:08:43.621 09:57:32 blockdev_nvme -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:43.621 * Looking for test storage... 00:08:43.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:43.621 09:57:32 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:43.621 09:57:32 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:43.621 09:57:32 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:43.621 09:57:32 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:43.621 09:57:32 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:43.621 09:57:32 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:43.621 09:57:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:43.621 09:57:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:43.622 09:57:32 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:43.622 09:57:32 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:43.622 09:57:32 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:43.622 09:57:32 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:43.622 09:57:32 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66262 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:43.622 09:57:33 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66262 00:08:43.622 09:57:33 blockdev_nvme -- common/autotest_common.sh@830 -- # '[' -z 66262 ']' 00:08:43.622 09:57:33 blockdev_nvme -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.622 09:57:33 blockdev_nvme -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:43.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
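The "Waiting for process to start up..." message here (and in the earlier suites) comes from waitforlisten, which polls the target's RPC socket until it answers, giving up after max_retries=100 as set above. A minimal poll equivalent, assuming rpc.py and the default socket path (the real helper in autotest_common.sh is more elaborate):

    # block until spdk_tgt answers RPC on the default UNIX socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done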
00:08:43.622 09:57:33 blockdev_nvme -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.622 09:57:33 blockdev_nvme -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:43.622 09:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.622 [2024-06-10 09:57:33.120328] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:43.622 [2024-06-10 09:57:33.120511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66262 ] 00:08:43.879 [2024-06-10 09:57:33.291587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.137 [2024-06-10 09:57:33.510020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.703 09:57:34 blockdev_nvme -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:44.703 09:57:34 blockdev_nvme -- common/autotest_common.sh@863 -- # return 0 00:08:44.703 09:57:34 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:44.703 09:57:34 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:44.703 09:57:34 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:44.703 09:57:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:44.703 09:57:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:44.962 09:57:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:44.962 09:57:34 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:44.962 09:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.220 09:57:34 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.220 09:57:34 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:45.220 09:57:34 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.220 09:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.220 09:57:34 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.220 09:57:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:45.220 09:57:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:45.220 09:57:34 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.220 09:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.220 09:57:34 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.220 09:57:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9786e8eb-11b6-4641-b2ef-279cd791e5cc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9786e8eb-11b6-4641-b2ef-279cd791e5cc",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b0371108-3f53-4c85-860b-1800f53ac3bd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b0371108-3f53-4c85-860b-1800f53ac3bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' 
"format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c64f30a9-1c96-4c3b-a8f4-c8ab03315dbe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c64f30a9-1c96-4c3b-a8f4-c8ab03315dbe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8c966234-e320-4b38-8f88-0ce6fe59ac8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c966234-e320-4b38-8f88-0ce6fe59ac8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5af8e6e7-efb2-4aa9-ba86-d67bf7c0ca82"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5af8e6e7-efb2-4aa9-ba86-d67bf7c0ca82",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": 
[' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "99253ede-2690-40bf-ba53-16b6142418c6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "99253ede-2690-40bf-ba53-16b6142418c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:45.221 09:57:34 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66262 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@949 -- # '[' -z 66262 ']' 00:08:45.221 09:57:34 blockdev_nvme -- common/autotest_common.sh@953 -- # kill -0 66262 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@954 -- # uname 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66262 00:08:45.480 killing process with pid 66262 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66262' 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@968 -- # kill 66262 00:08:45.480 09:57:34 blockdev_nvme -- common/autotest_common.sh@973 -- # wait 66262 00:08:47.381 09:57:36 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:47.381 09:57:36 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
-b Nvme0n1 '' 00:08:47.381 09:57:36 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:47.381 09:57:36 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:47.381 09:57:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.381 ************************************ 00:08:47.381 START TEST bdev_hello_world 00:08:47.381 ************************************ 00:08:47.381 09:57:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:47.640 [2024-06-10 09:57:36.940042] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:47.640 [2024-06-10 09:57:36.940211] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66346 ] 00:08:47.640 [2024-06-10 09:57:37.113517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.898 [2024-06-10 09:57:37.378838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.465 [2024-06-10 09:57:37.966600] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:48.465 [2024-06-10 09:57:37.966670] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:48.465 [2024-06-10 09:57:37.966700] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:48.465 [2024-06-10 09:57:37.969686] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:48.465 [2024-06-10 09:57:37.970193] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:48.465 [2024-06-10 09:57:37.970236] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:48.465 [2024-06-10 09:57:37.970435] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:48.465 00:08:48.465 [2024-06-10 09:57:37.970467] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:49.839 ************************************ 00:08:49.839 END TEST bdev_hello_world 00:08:49.839 ************************************ 00:08:49.839 00:08:49.839 real 0m2.134s 00:08:49.839 user 0m1.833s 00:08:49.839 sys 0m0.193s 00:08:49.839 09:57:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:49.839 09:57:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:49.839 09:57:39 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:49.839 09:57:39 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:49.839 09:57:39 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:49.839 09:57:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.839 ************************************ 00:08:49.839 START TEST bdev_bounds 00:08:49.839 ************************************ 00:08:49.839 Process bdevio pid: 66394 00:08:49.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
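The bdev_hello_world subtest above is a single run of the hello_bdev example against the first NVMe bdev (command verbatim from the log; bdev.json is the config gen_nvme.sh emitted earlier):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
    # expected NOTICE sequence, as logged above: open bdev Nvme0n1 ->
    # open io channel -> write -> read back 'Hello World!' -> stop app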
00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66394 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66394' 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66394 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 66394 ']' 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:49.839 09:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:49.839 [2024-06-10 09:57:39.118016] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:49.840 [2024-06-10 09:57:39.118926] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66394 ] 00:08:49.840 [2024-06-10 09:57:39.291486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.097 [2024-06-10 09:57:39.481411] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.097 [2024-06-10 09:57:39.481504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.097 [2024-06-10 09:57:39.481527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.685 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:50.685 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:08:50.685 09:57:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:50.943 I/O targets: 00:08:50.943 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:50.943 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:50.943 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:50.943 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:50.943 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:50.943 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:50.943 00:08:50.943 00:08:50.943 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.943 http://cunit.sourceforge.net/ 00:08:50.943 00:08:50.943 00:08:50.943 Suite: bdevio tests on: Nvme3n1 00:08:50.943 Test: blockdev write read block ...passed 00:08:50.943 Test: blockdev write zeroes read block ...passed 00:08:50.943 Test: blockdev write zeroes read no split ...passed 00:08:50.943 Test: blockdev write zeroes read split ...passed 00:08:50.943 Test: blockdev write zeroes read split partial ...passed 00:08:50.943 
Test: blockdev reset ...[2024-06-10 09:57:40.297835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:50.943 [2024-06-10 09:57:40.301579] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:50.943 passed 00:08:50.943 Test: blockdev write read 8 blocks ...passed 00:08:50.943 Test: blockdev write read size > 128k ...passed 00:08:50.943 Test: blockdev write read invalid size ...passed 00:08:50.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:50.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:50.943 Test: blockdev write read max offset ...passed 00:08:50.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:50.943 Test: blockdev writev readv 8 blocks ...passed 00:08:50.943 Test: blockdev writev readv 30 x 1block ...passed 00:08:50.943 Test: blockdev writev readv block ...passed 00:08:50.943 Test: blockdev writev readv size > 128k ...passed 00:08:50.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:50.943 Test: blockdev comparev and writev ...[2024-06-10 09:57:40.310391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27840e000 len:0x1000 00:08:50.943 [2024-06-10 09:57:40.310456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:50.943 passed 00:08:50.943 Test: blockdev nvme passthru rw ...passed 00:08:50.943 Test: blockdev nvme passthru vendor specific ...passed 00:08:50.943 Test: blockdev nvme admin passthru ...[2024-06-10 09:57:40.311307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:50.943 [2024-06-10 09:57:40.311361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:50.943 passed 00:08:50.943 Test: blockdev copy ...passed 00:08:50.943 Suite: bdevio tests on: Nvme2n3 00:08:50.943 Test: blockdev write read block ...passed 00:08:50.943 Test: blockdev write zeroes read block ...passed 00:08:50.943 Test: blockdev write zeroes read no split ...passed 00:08:50.943 Test: blockdev write zeroes read split ...passed 00:08:50.943 Test: blockdev write zeroes read split partial ...passed 00:08:50.943 Test: blockdev reset ...[2024-06-10 09:57:40.376367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:50.943 [2024-06-10 09:57:40.380180] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:50.943 passed 00:08:50.943 Test: blockdev write read 8 blocks ...passed 00:08:50.943 Test: blockdev write read size > 128k ...passed 00:08:50.943 Test: blockdev write read invalid size ...passed 00:08:50.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:50.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:50.943 Test: blockdev write read max offset ...passed 00:08:50.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:50.943 Test: blockdev writev readv 8 blocks ...passed 00:08:50.943 Test: blockdev writev readv 30 x 1block ...passed 00:08:50.943 Test: blockdev writev readv block ...passed 00:08:50.943 Test: blockdev writev readv size > 128k ...passed 00:08:50.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:50.943 Test: blockdev comparev and writev ...[2024-06-10 09:57:40.388310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27840a000 len:0x1000 00:08:50.943 [2024-06-10 09:57:40.388371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:50.943 passed 00:08:50.943 Test: blockdev nvme passthru rw ...passed 00:08:50.943 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:57:40.389403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:50.943 [2024-06-10 09:57:40.389448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:50.943 passed 00:08:50.943 Test: blockdev nvme admin passthru ...passed 00:08:50.943 Test: blockdev copy ...passed 00:08:50.943 Suite: bdevio tests on: Nvme2n2 00:08:50.943 Test: blockdev write read block ...passed 00:08:50.943 Test: blockdev write zeroes read block ...passed 00:08:50.944 Test: blockdev write zeroes read no split ...passed 00:08:50.944 Test: blockdev write zeroes read split ...passed 00:08:51.202 Test: blockdev write zeroes read split partial ...passed 00:08:51.202 Test: blockdev reset ...[2024-06-10 09:57:40.465263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:51.202 [2024-06-10 09:57:40.469126] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:51.202 passed 00:08:51.202 Test: blockdev write read 8 blocks ...passed 00:08:51.202 Test: blockdev write read size > 128k ...passed 00:08:51.202 Test: blockdev write read invalid size ...passed 00:08:51.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:51.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:51.202 Test: blockdev write read max offset ...passed 00:08:51.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:51.202 Test: blockdev writev readv 8 blocks ...passed 00:08:51.202 Test: blockdev writev readv 30 x 1block ...passed 00:08:51.202 Test: blockdev writev readv block ...passed 00:08:51.202 Test: blockdev writev readv size > 128k ...passed 00:08:51.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:51.202 Test: blockdev comparev and writev ...[2024-06-10 09:57:40.481686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ca06000 len:0x1000 00:08:51.202 [2024-06-10 09:57:40.481757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:51.202 passed 00:08:51.202 Test: blockdev nvme passthru rw ...passed 00:08:51.202 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:57:40.482665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:51.202 [2024-06-10 09:57:40.482711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:51.202 passed 00:08:51.202 Test: blockdev nvme admin passthru ...passed 00:08:51.202 Test: blockdev copy ...passed 00:08:51.202 Suite: bdevio tests on: Nvme2n1 00:08:51.202 Test: blockdev write read block ...passed 00:08:51.202 Test: blockdev write zeroes read block ...passed 00:08:51.202 Test: blockdev write zeroes read no split ...passed 00:08:51.202 Test: blockdev write zeroes read split ...passed 00:08:51.202 Test: blockdev write zeroes read split partial ...passed 00:08:51.202 Test: blockdev reset ...[2024-06-10 09:57:40.553882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:51.202 [2024-06-10 09:57:40.557727] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:51.202 passed 00:08:51.202 Test: blockdev write read 8 blocks ...passed 00:08:51.202 Test: blockdev write read size > 128k ...passed 00:08:51.202 Test: blockdev write read invalid size ...passed 00:08:51.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:51.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:51.202 Test: blockdev write read max offset ...passed 00:08:51.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:51.202 Test: blockdev writev readv 8 blocks ...passed 00:08:51.202 Test: blockdev writev readv 30 x 1block ...passed 00:08:51.202 Test: blockdev writev readv block ...passed 00:08:51.202 Test: blockdev writev readv size > 128k ...passed 00:08:51.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:51.202 Test: blockdev comparev and writev ...[2024-06-10 09:57:40.566530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ca01000 len:0x1000 00:08:51.202 [2024-06-10 09:57:40.566593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:51.202 passed 00:08:51.202 Test: blockdev nvme passthru rw ...passed 00:08:51.202 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:57:40.567404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:51.202 [2024-06-10 09:57:40.567449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:51.202 passed 00:08:51.202 Test: blockdev nvme admin passthru ...passed 00:08:51.202 Test: blockdev copy ...passed 00:08:51.202 Suite: bdevio tests on: Nvme1n1 00:08:51.202 Test: blockdev write read block ...passed 00:08:51.202 Test: blockdev write zeroes read block ...passed 00:08:51.202 Test: blockdev write zeroes read no split ...passed 00:08:51.202 Test: blockdev write zeroes read split ...passed 00:08:51.202 Test: blockdev write zeroes read split partial ...passed 00:08:51.202 Test: blockdev reset ...[2024-06-10 09:57:40.642278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:51.202 [2024-06-10 09:57:40.645736] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:51.202 passed 00:08:51.202 Test: blockdev write read 8 blocks ...passed 00:08:51.202 Test: blockdev write read size > 128k ...passed 00:08:51.202 Test: blockdev write read invalid size ...passed 00:08:51.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:51.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:51.202 Test: blockdev write read max offset ...passed 00:08:51.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:51.202 Test: blockdev writev readv 8 blocks ...passed 00:08:51.202 Test: blockdev writev readv 30 x 1block ...passed 00:08:51.202 Test: blockdev writev readv block ...passed 00:08:51.202 Test: blockdev writev readv size > 128k ...passed 00:08:51.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:51.202 Test: blockdev comparev and writev ...[2024-06-10 09:57:40.654670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c606000 len:0x1000 00:08:51.202 [2024-06-10 09:57:40.654731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:51.202 passed 00:08:51.202 Test: blockdev nvme passthru rw ...passed 00:08:51.202 Test: blockdev nvme passthru vendor specific ...passed 00:08:51.202 Test: blockdev nvme admin passthru ...[2024-06-10 09:57:40.655467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:51.202 [2024-06-10 09:57:40.655518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:51.202 passed 00:08:51.202 Test: blockdev copy ...passed 00:08:51.202 Suite: bdevio tests on: Nvme0n1 00:08:51.202 Test: blockdev write read block ...passed 00:08:51.202 Test: blockdev write zeroes read block ...passed 00:08:51.202 Test: blockdev write zeroes read no split ...passed 00:08:51.202 Test: blockdev write zeroes read split ...passed 00:08:51.460 Test: blockdev write zeroes read split partial ...passed 00:08:51.460 Test: blockdev reset ...[2024-06-10 09:57:40.728024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:51.460 [2024-06-10 09:57:40.731547] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:51.460 passed 00:08:51.460 Test: blockdev write read 8 blocks ...passed 00:08:51.460 Test: blockdev write read size > 128k ...passed 00:08:51.460 Test: blockdev write read invalid size ...passed 00:08:51.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:51.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:51.460 Test: blockdev write read max offset ...passed 00:08:51.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:51.460 Test: blockdev writev readv 8 blocks ...passed 00:08:51.460 Test: blockdev writev readv 30 x 1block ...passed 00:08:51.460 Test: blockdev writev readv block ...passed 00:08:51.460 Test: blockdev writev readv size > 128k ...passed 00:08:51.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:51.460 Test: blockdev comparev and writev ...passed 00:08:51.460 Test: blockdev nvme passthru rw ...[2024-06-10 09:57:40.739362] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:51.460 separate metadata which is not supported yet. 00:08:51.460 passed 00:08:51.460 Test: blockdev nvme passthru vendor specific ...passed 00:08:51.460 Test: blockdev nvme admin passthru ...[2024-06-10 09:57:40.739903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:51.460 [2024-06-10 09:57:40.739965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:51.460 passed 00:08:51.460 Test: blockdev copy ...passed 00:08:51.460 00:08:51.460 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.460 suites 6 6 n/a 0 0 00:08:51.460 tests 138 138 138 0 0 00:08:51.460 asserts 893 893 893 0 n/a 00:08:51.460 00:08:51.460 Elapsed time = 1.398 seconds 00:08:51.460 0 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66394 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 66394 ']' 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 66394 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66394 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66394' 00:08:51.460 killing process with pid 66394 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # kill 66394 00:08:51.460 09:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # wait 66394 00:08:52.395 09:57:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:52.395 00:08:52.395 real 0m2.725s 00:08:52.395 user 0m6.794s 00:08:52.395 sys 0m0.332s 00:08:52.395 09:57:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:52.395 ************************************ 00:08:52.395 END TEST bdev_bounds 00:08:52.395 ************************************ 00:08:52.395 09:57:41 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:52.395 09:57:41 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:52.395 09:57:41 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:08:52.395 09:57:41 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:52.395 09:57:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.395 ************************************ 00:08:52.395 START TEST bdev_nbd 00:08:52.395 ************************************ 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=66453 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 66453 /var/tmp/spdk-nbd.sock 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 66453 ']' 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 
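The bdev_nbd test below starts a dedicated bdev_svc app on /var/tmp/spdk-nbd.sock and then exports each NVMe bdev as a kernel NBD device through rpc.py, verifies it, and unmaps it again. A rough sketch of the per-bdev cycle, using the same RPC names that appear in the trace:

    # map a bdev to a kernel /dev/nbdX node over the dedicated RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1 /dev/nbd0
    # list the current bdev <-> nbd mapping as JSON
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    # tear the export down again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_stop_disk /dev/nbd0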
00:08:52.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:52.395 09:57:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:52.395 [2024-06-10 09:57:41.904599] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:52.395 [2024-06-10 09:57:41.904783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.653 [2024-06-10 09:57:42.077242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.910 [2024-06-10 09:57:42.339673] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:53.475 09:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.041 1+0 records in 00:08:54.041 1+0 records out 00:08:54.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440048 s, 9.3 MB/s 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.041 1+0 records in 00:08:54.041 1+0 records out 00:08:54.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578754 s, 7.1 MB/s 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:54.041 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:54.299 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:54.299 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.557 1+0 records in 00:08:54.557 1+0 records out 00:08:54.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605062 s, 6.8 MB/s 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:54.557 09:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:54.815 
09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.815 1+0 records in 00:08:54.815 1+0 records out 00:08:54.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572271 s, 7.2 MB/s 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:54.815 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd4 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd4 /proc/partitions 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.073 1+0 records in 00:08:55.073 1+0 records out 00:08:55.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837749 s, 4.9 MB/s 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:55.073 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:08:55.333 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:55.333 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:55.333 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:55.333 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd5 00:08:55.333 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd5 /proc/partitions 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.334 1+0 records in 00:08:55.334 1+0 records out 00:08:55.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101387 s, 4.0 MB/s 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:55.334 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.591 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:55.591 { 00:08:55.591 "nbd_device": "/dev/nbd0", 00:08:55.592 "bdev_name": "Nvme0n1" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd1", 00:08:55.592 "bdev_name": "Nvme1n1" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd2", 00:08:55.592 "bdev_name": "Nvme2n1" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd3", 00:08:55.592 "bdev_name": "Nvme2n2" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd4", 00:08:55.592 "bdev_name": "Nvme2n3" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd5", 00:08:55.592 "bdev_name": "Nvme3n1" 00:08:55.592 } 00:08:55.592 ]' 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd0", 00:08:55.592 "bdev_name": "Nvme0n1" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd1", 00:08:55.592 "bdev_name": "Nvme1n1" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd2", 
00:08:55.592 "bdev_name": "Nvme2n1" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd3", 00:08:55.592 "bdev_name": "Nvme2n2" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd4", 00:08:55.592 "bdev_name": "Nvme2n3" 00:08:55.592 }, 00:08:55.592 { 00:08:55.592 "nbd_device": "/dev/nbd5", 00:08:55.592 "bdev_name": "Nvme3n1" 00:08:55.592 } 00:08:55.592 ]' 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.592 09:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.849 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.107 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd2 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.366 09:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.624 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.883 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.448 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.706 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:57.706 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:57.706 09:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.706 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:57.706 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:57.707 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:08:57.965 /dev/nbd0 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.965 1+0 records in 00:08:57.965 1+0 records out 00:08:57.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598096 s, 6.8 MB/s 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:57.965 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:58.224 /dev/nbd1 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.224 1+0 records in 00:08:58.224 1+0 records out 00:08:58.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707733 s, 5.8 MB/s 
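Each nbd_start_disk in this section is followed by the waitfornbd helper, whose xtrace is interleaved above and below: it polls /proc/partitions for the new device (up to 20 attempts), reads a single 4 KiB block with O_DIRECT into a scratch file, and checks that a non-zero number of bytes came back before cleaning up. A simplified rendering of that loop (the delay between attempts is not visible in this capture and is assumed):

    # wait for the kernel to publish the nbd device
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd10 /proc/partitions && break
        sleep 0.1   # assumed back-off; not shown in the xtrace
    done
    # read one 4 KiB block directly from the device and confirm it arrived
    dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ]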
00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:58.224 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:58.482 /dev/nbd10 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.482 1+0 records in 00:08:58.482 1+0 records out 00:08:58.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601682 s, 6.8 MB/s 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:58.482 09:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:58.741 /dev/nbd11 00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 
00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:08:58.999 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.000 1+0 records in 00:08:59.000 1+0 records out 00:08:59.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087384 s, 4.7 MB/s 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:59.000 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:59.258 /dev/nbd12 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd12 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd12 /proc/partitions 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.258 1+0 records in 00:08:59.258 1+0 records out 00:08:59.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779288 s, 5.3 MB/s 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:59.258 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:59.517 /dev/nbd13 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd13 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd13 /proc/partitions 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.517 1+0 records in 00:08:59.517 1+0 records out 00:08:59.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844064 s, 4.9 MB/s 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.517 09:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.775 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd0", 00:08:59.776 "bdev_name": "Nvme0n1" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd1", 00:08:59.776 "bdev_name": "Nvme1n1" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd10", 00:08:59.776 "bdev_name": "Nvme2n1" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd11", 00:08:59.776 "bdev_name": "Nvme2n2" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd12", 00:08:59.776 "bdev_name": "Nvme2n3" 00:08:59.776 
}, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd13", 00:08:59.776 "bdev_name": "Nvme3n1" 00:08:59.776 } 00:08:59.776 ]' 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd0", 00:08:59.776 "bdev_name": "Nvme0n1" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd1", 00:08:59.776 "bdev_name": "Nvme1n1" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd10", 00:08:59.776 "bdev_name": "Nvme2n1" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd11", 00:08:59.776 "bdev_name": "Nvme2n2" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd12", 00:08:59.776 "bdev_name": "Nvme2n3" 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "nbd_device": "/dev/nbd13", 00:08:59.776 "bdev_name": "Nvme3n1" 00:08:59.776 } 00:08:59.776 ]' 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:59.776 /dev/nbd1 00:08:59.776 /dev/nbd10 00:08:59.776 /dev/nbd11 00:08:59.776 /dev/nbd12 00:08:59.776 /dev/nbd13' 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:59.776 /dev/nbd1 00:08:59.776 /dev/nbd10 00:08:59.776 /dev/nbd11 00:08:59.776 /dev/nbd12 00:08:59.776 /dev/nbd13' 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:59.776 256+0 records in 00:08:59.776 256+0 records out 00:08:59.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00742506 s, 141 MB/s 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:59.776 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:00.034 256+0 records in 00:09:00.034 256+0 records out 00:09:00.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145691 s, 7.2 MB/s 00:09:00.034 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.034 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:09:00.034 256+0 records in 00:09:00.034 256+0 records out 00:09:00.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138236 s, 7.6 MB/s 00:09:00.034 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.034 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:00.292 256+0 records in 00:09:00.292 256+0 records out 00:09:00.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159914 s, 6.6 MB/s 00:09:00.292 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.292 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:00.550 256+0 records in 00:09:00.550 256+0 records out 00:09:00.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156358 s, 6.7 MB/s 00:09:00.550 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.550 09:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:00.550 256+0 records in 00:09:00.550 256+0 records out 00:09:00.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135329 s, 7.7 MB/s 00:09:00.550 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.550 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:00.808 256+0 records in 00:09:00.808 256+0 records out 00:09:00.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152562 s, 6.9 MB/s 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.808 09:57:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.808 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.067 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.326 09:57:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.584 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:01.842 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:02.100 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.357 09:57:51 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.357 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.615 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.615 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.615 09:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:02.873 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:03.131 malloc_lvol_verify 00:09:03.131 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:03.389 76e10b66-f7d4-4a47-94e8-ec255cd9e8ad 00:09:03.389 09:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:03.647 9ad6ec9b-a67a-4773-ac3a-b28e62a0e1fa 00:09:03.647 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:03.906 /dev/nbd0 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:03.906 mke2fs 1.46.5 (30-Dec-2021) 00:09:03.906 Discarding device blocks: 0/4096 done 00:09:03.906 Creating filesystem with 
4096 1k blocks and 1024 inodes 00:09:03.906 00:09:03.906 Allocating group tables: 0/1 done 00:09:03.906 Writing inode tables: 0/1 done 00:09:03.906 Creating journal (1024 blocks): done 00:09:03.906 Writing superblocks and filesystem accounting information: 0/1 done 00:09:03.906 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.906 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 66453 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 66453 ']' 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 66453 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:04.165 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 66453 00:09:04.424 killing process with pid 66453 00:09:04.424 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:04.424 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:04.424 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66453' 00:09:04.424 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # kill 66453 00:09:04.424 09:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # wait 66453 00:09:05.800 09:57:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:05.800 00:09:05.800 real 0m13.078s 00:09:05.800 user 0m18.794s 00:09:05.800 sys 0m4.006s 00:09:05.800 09:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:05.800 09:57:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:05.800 ************************************ 00:09:05.800 END TEST bdev_nbd 00:09:05.800 ************************************ 00:09:05.800 09:57:54 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:05.800 skipping fio tests on NVMe due to multi-ns failures. 00:09:05.800 09:57:54 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:09:05.800 09:57:54 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:05.800 09:57:54 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:05.800 09:57:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:05.800 09:57:54 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:09:05.800 09:57:54 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:05.800 09:57:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.800 ************************************ 00:09:05.800 START TEST bdev_verify 00:09:05.800 ************************************ 00:09:05.800 09:57:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:05.800 [2024-06-10 09:57:55.044477] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:05.800 [2024-06-10 09:57:55.044946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66861 ] 00:09:05.800 [2024-06-10 09:57:55.228804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:06.059 [2024-06-10 09:57:55.462377] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.059 [2024-06-10 09:57:55.462383] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.625 Running I/O for 5 seconds... 
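With the NBD round-trip finished, the verify stage hands the same controllers to the bdevperf example application. The full command line is embedded in the run_test invocation above; stripped of the harness paths it reduces to the call below, and the later bdev_verify_big_io and bdev_write_zeroes stages reuse it with only -o, -w and -t changed (65536-byte I/Os, write_zeroes workload, 1-second run). Paths are relative to an SPDK checkout; bdev.json is the gen_nvme.sh output that attaches Nvme0-Nvme3 by PCI address.

    # Queue depth 128, 4 KiB I/Os, verify workload for 5 s; -m 0x3 pins the two
    # reactors to cores 0 and 1, matching the "Reactor started on core 0/1" lines.
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3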
00:09:11.894 00:09:11.894 Latency(us) 00:09:11.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.894 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x0 length 0xbd0bd 00:09:11.894 Nvme0n1 : 5.05 1572.88 6.14 0.00 0.00 81107.48 14656.23 79596.45 00:09:11.894 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:11.894 Nvme0n1 : 5.04 1499.58 5.86 0.00 0.00 85049.99 15728.64 121062.87 00:09:11.894 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x0 length 0xa0000 00:09:11.894 Nvme1n1 : 5.05 1572.39 6.14 0.00 0.00 80954.96 16920.20 68634.07 00:09:11.894 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0xa0000 length 0xa0000 00:09:11.894 Nvme1n1 : 5.06 1505.91 5.88 0.00 0.00 84544.68 6494.02 109147.23 00:09:11.894 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x0 length 0x80000 00:09:11.894 Nvme2n1 : 5.05 1571.92 6.14 0.00 0.00 80839.80 15847.80 66250.94 00:09:11.894 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x80000 length 0x80000 00:09:11.894 Nvme2n1 : 5.06 1505.20 5.88 0.00 0.00 84374.19 7477.06 111053.73 00:09:11.894 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x0 length 0x80000 00:09:11.894 Nvme2n2 : 5.06 1580.06 6.17 0.00 0.00 80286.40 4081.11 68634.07 00:09:11.894 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x80000 length 0x80000 00:09:11.894 Nvme2n2 : 5.07 1514.71 5.92 0.00 0.00 83834.02 7804.74 115819.99 00:09:11.894 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x0 length 0x80000 00:09:11.894 Nvme2n3 : 5.07 1589.77 6.21 0.00 0.00 79719.23 6583.39 71493.82 00:09:11.894 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x80000 length 0x80000 00:09:11.894 Nvme2n3 : 5.07 1514.26 5.92 0.00 0.00 83675.18 7923.90 121062.87 00:09:11.894 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.894 Verification LBA range: start 0x0 length 0x20000 00:09:11.895 Nvme3n1 : 5.07 1589.10 6.21 0.00 0.00 79574.89 7536.64 73876.95 00:09:11.895 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.895 Verification LBA range: start 0x20000 length 0x20000 00:09:11.895 Nvme3n1 : 5.07 1513.66 5.91 0.00 0.00 83518.92 8757.99 125829.12 00:09:11.895 =================================================================================================================== 00:09:11.895 Total : 18529.43 72.38 0.00 0.00 82243.93 4081.11 125829.12 00:09:13.267 00:09:13.267 real 0m7.668s 00:09:13.267 user 0m13.900s 00:09:13.267 sys 0m0.284s 00:09:13.267 09:58:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:13.267 ************************************ 00:09:13.267 END TEST bdev_verify 00:09:13.267 ************************************ 00:09:13.267 09:58:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:13.267 09:58:02 blockdev_nvme -- bdev/blockdev.sh@778 -- 
# run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:13.267 09:58:02 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:09:13.267 09:58:02 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:13.267 09:58:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:13.267 ************************************ 00:09:13.267 START TEST bdev_verify_big_io 00:09:13.267 ************************************ 00:09:13.267 09:58:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:13.267 [2024-06-10 09:58:02.740788] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:13.267 [2024-06-10 09:58:02.740964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66965 ] 00:09:13.525 [2024-06-10 09:58:02.903124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:13.782 [2024-06-10 09:58:03.089612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.782 [2024-06-10 09:58:03.089622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.348 Running I/O for 5 seconds... 00:09:20.907 00:09:20.907 Latency(us) 00:09:20.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.907 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x0 length 0xbd0b 00:09:20.907 Nvme0n1 : 5.88 118.08 7.38 0.00 0.00 1028368.95 13107.20 1517575.45 00:09:20.907 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:20.907 Nvme0n1 : 5.76 122.13 7.63 0.00 0.00 1009427.30 18945.86 1044763.00 00:09:20.907 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x0 length 0xa000 00:09:20.907 Nvme1n1 : 5.88 118.22 7.39 0.00 0.00 995089.02 30742.34 1540453.47 00:09:20.907 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0xa000 length 0xa000 00:09:20.907 Nvme1n1 : 5.88 126.31 7.89 0.00 0.00 951314.90 62914.56 869364.83 00:09:20.907 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x0 length 0x8000 00:09:20.907 Nvme2n1 : 5.88 127.51 7.97 0.00 0.00 903483.59 52905.43 1052389.00 00:09:20.907 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x8000 length 0x8000 00:09:20.907 Nvme2n1 : 5.88 125.91 7.87 0.00 0.00 923684.37 64344.44 869364.83 00:09:20.907 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x0 length 0x8000 00:09:20.907 Nvme2n2 : 5.93 125.53 7.85 0.00 0.00 894166.65 40036.54 1616713.54 00:09:20.907 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x8000 length 0x8000 00:09:20.907 Nvme2n2 : 5.89 130.43 8.15 0.00 0.00 873332.83 50760.61 
896055.85 00:09:20.907 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x0 length 0x8000 00:09:20.907 Nvme2n3 : 5.98 131.55 8.22 0.00 0.00 826797.62 26333.56 1639591.56 00:09:20.907 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x8000 length 0x8000 00:09:20.907 Nvme2n3 : 5.95 133.54 8.35 0.00 0.00 824448.13 61008.06 926559.88 00:09:20.907 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x0 length 0x2000 00:09:20.907 Nvme3n1 : 5.99 146.30 9.14 0.00 0.00 725835.18 1370.30 1654843.58 00:09:20.907 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:20.907 Verification LBA range: start 0x2000 length 0x2000 00:09:20.907 Nvme3n1 : 5.97 145.70 9.11 0.00 0.00 737641.78 5421.61 1128649.08 00:09:20.907 =================================================================================================================== 00:09:20.907 Total : 1551.22 96.95 0.00 0.00 884076.45 1370.30 1654843.58 00:09:22.283 00:09:22.283 real 0m8.873s 00:09:22.283 user 0m16.401s 00:09:22.283 sys 0m0.269s 00:09:22.283 09:58:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.283 09:58:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:22.283 ************************************ 00:09:22.283 END TEST bdev_verify_big_io 00:09:22.283 ************************************ 00:09:22.283 09:58:11 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:22.283 09:58:11 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:09:22.283 09:58:11 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:22.283 09:58:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.283 ************************************ 00:09:22.283 START TEST bdev_write_zeroes 00:09:22.283 ************************************ 00:09:22.283 09:58:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:22.283 [2024-06-10 09:58:11.668509] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:22.283 [2024-06-10 09:58:11.668677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67074 ] 00:09:22.541 [2024-06-10 09:58:11.831124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.541 [2024-06-10 09:58:12.027416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.474 Running I/O for 1 seconds... 
00:09:24.406 00:09:24.406 Latency(us) 00:09:24.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.406 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:24.406 Nvme0n1 : 1.02 9268.56 36.21 0.00 0.00 13760.16 10902.81 25261.15 00:09:24.406 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:24.406 Nvme1n1 : 1.02 9254.07 36.15 0.00 0.00 13761.83 11379.43 26452.71 00:09:24.406 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:24.406 Nvme2n1 : 1.02 9239.98 36.09 0.00 0.00 13734.51 10962.39 25380.31 00:09:24.406 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:24.406 Nvme2n2 : 1.02 9275.44 36.23 0.00 0.00 13664.52 9055.88 25499.46 00:09:24.406 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:24.406 Nvme2n3 : 1.02 9261.60 36.18 0.00 0.00 13651.70 9413.35 25261.15 00:09:24.406 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:24.406 Nvme3n1 : 1.02 9247.79 36.12 0.00 0.00 13639.43 9234.62 25261.15 00:09:24.406 =================================================================================================================== 00:09:24.406 Total : 55547.45 216.98 0.00 0.00 13701.85 9055.88 26452.71 00:09:25.782 00:09:25.782 real 0m3.359s 00:09:25.782 user 0m3.012s 00:09:25.782 sys 0m0.222s 00:09:25.782 09:58:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:25.782 09:58:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:25.782 ************************************ 00:09:25.782 END TEST bdev_write_zeroes 00:09:25.782 ************************************ 00:09:25.782 09:58:14 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.782 09:58:14 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:09:25.783 09:58:14 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:25.783 09:58:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.783 ************************************ 00:09:25.783 START TEST bdev_json_nonenclosed 00:09:25.783 ************************************ 00:09:25.783 09:58:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.783 [2024-06-10 09:58:15.084575] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:25.783 [2024-06-10 09:58:15.084761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67137 ] 00:09:25.783 [2024-06-10 09:58:15.254912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.041 [2024-06-10 09:58:15.468151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.041 [2024-06-10 09:58:15.468262] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:09:26.041 [2024-06-10 09:58:15.468293] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:26.041 [2024-06-10 09:58:15.468309] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.608 00:09:26.608 real 0m0.894s 00:09:26.608 user 0m0.676s 00:09:26.608 sys 0m0.112s 00:09:26.608 09:58:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:26.608 09:58:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:26.608 ************************************ 00:09:26.608 END TEST bdev_json_nonenclosed 00:09:26.608 ************************************ 00:09:26.608 09:58:15 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:26.608 09:58:15 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:09:26.608 09:58:15 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:26.608 09:58:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.608 ************************************ 00:09:26.608 START TEST bdev_json_nonarray 00:09:26.608 ************************************ 00:09:26.608 09:58:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:26.608 [2024-06-10 09:58:16.027244] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:26.608 [2024-06-10 09:58:16.027403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67164 ] 00:09:26.867 [2024-06-10 09:58:16.198148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.125 [2024-06-10 09:58:16.423536] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.125 [2024-06-10 09:58:16.423672] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
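The two JSON negative tests drive bdevperf with configs that break the top-level layout and expect exactly the *ERROR* lines logged above ("not enclosed in {}" and "'subsystems' should be an array"). Judging by those messages and by the well-formed config the gpt suite loads further down, the file must be a single JSON object whose "subsystems" member is an array of subsystem blocks; a minimal valid skeleton, with the parameters shortened and the file name purely illustrative, would look like this:

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
          ]
        }
      ]
    }
    EOF
    # nonenclosed.json presumably drops the enclosing braces and nonarray.json makes
    # "subsystems" a non-array value, so both runs are expected to fail as shown.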
00:09:27.125 [2024-06-10 09:58:16.423703] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:27.125 [2024-06-10 09:58:16.423719] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:27.384 00:09:27.384 real 0m0.918s 00:09:27.384 user 0m0.680s 00:09:27.384 sys 0m0.131s 00:09:27.384 09:58:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:27.384 09:58:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:27.384 ************************************ 00:09:27.384 END TEST bdev_json_nonarray 00:09:27.384 ************************************ 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:27.384 09:58:16 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:27.384 00:09:27.384 real 0m43.980s 00:09:27.384 user 1m6.366s 00:09:27.384 sys 0m6.330s 00:09:27.384 09:58:16 blockdev_nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:27.384 09:58:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.384 ************************************ 00:09:27.384 END TEST blockdev_nvme 00:09:27.384 ************************************ 00:09:27.643 09:58:16 -- spdk/autotest.sh@213 -- # uname -s 00:09:27.643 09:58:16 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:09:27.643 09:58:16 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:27.643 09:58:16 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:27.643 09:58:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:27.643 09:58:16 -- common/autotest_common.sh@10 -- # set +x 00:09:27.643 ************************************ 00:09:27.643 START TEST blockdev_nvme_gpt 00:09:27.643 ************************************ 00:09:27.643 09:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:27.643 * Looking for test storage... 
00:09:27.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67240 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67240 00:09:27.643 09:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@830 -- # '[' -z 67240 ']' 00:09:27.643 09:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:27.643 09:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.643 09:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:27.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.643 09:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
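Unlike the bdevperf stages, the gpt suite needs a long-lived spdk_tgt process, so start_spdk_tgt launches it and waitforlisten blocks until the RPC socket answers or the pid dies. A rough stand-alone equivalent, assuming the default /var/tmp/spdk.sock socket shown in the log and using rpc_get_methods purely as a liveness probe (the real helper in autotest_common.sh is more thorough):

    ./build/bin/spdk_tgt &
    tgt_pid=$!

    # Give the target ~30 s to create and answer on its RPC socket.
    for _ in $(seq 1 300); do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        [ -S /var/tmp/spdk.sock ] &&
            ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done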
00:09:27.643 09:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:27.643 09:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.643 [2024-06-10 09:58:17.144659] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:27.643 [2024-06-10 09:58:17.144832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67240 ] 00:09:27.902 [2024-06-10 09:58:17.320481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.160 [2024-06-10 09:58:17.551718] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.096 09:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:29.096 09:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@863 -- # return 0 00:09:29.096 09:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:29.096 09:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:09:29.096 09:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.354 Waiting for block devices as requested 00:09:29.354 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.613 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.613 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.613 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.879 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:34.879 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local nvme bdf 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme2n1 00:09:34.879 09:58:24 
blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n2 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme2n2 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:34.879 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n3 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme2n3 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3c3n1 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme3c3n1 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3n1 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # local device=nvme3n1 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:09:34.880 BYT; 00:09:34.880 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:34.880 09:58:24 blockdev_nvme_gpt -- 
bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:09:34.880 BYT; 00:09:34.880 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:34.880 09:58:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:34.880 09:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 
1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:35.813 The operation has completed successfully. 00:09:35.814 09:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:37.188 The operation has completed successfully. 00:09:37.188 09:58:26 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.015 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.015 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.015 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.015 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.015 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:38.273 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:38.273 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 [] 00:09:38.273 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:38.273 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:38.273 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:38.273 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:38.273 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.273 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:38.273 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:38.273 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.532 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:38.532 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:38.533 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:09:38.533 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:38.533 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
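For reference, the GPT preparation traced above (bdev/blockdev.sh@128-133 plus the get_spdk_gpt helpers) condenses to a short standalone sequence. This is an illustrative sketch, not part of the captured run; the device path, partition names and GUID values are copied verbatim from the trace:
  # Create a GPT label with the two SPDK test partitions on the scratch namespace.
  parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  # Partition type GUIDs as extracted from module/bdev/gpt/gpt.h by get_spdk_gpt and get_spdk_gpt_old.
  SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
  SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
  # Tag partition 1 with the current SPDK type GUID and partition 2 with the legacy one,
  # assigning the fixed unique partition GUIDs the test looks up later (g_unique_partguid*).
  sgdisk -t 1:"$SPDK_GPT_GUID" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1
  sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1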
00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:38.533 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:38.533 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:38.533 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:38.533 09:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:38.533 09:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.533 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:38.792 09:58:28 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:38.792 09:58:28 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:38.793 09:58:28 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e2ceab5d-a2a4-4f24-b425-80696044eb1f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e2ceab5d-a2a4-4f24-b425-80696044eb1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d25c0212-c221-4551-96bf-402049a9de2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d25c0212-c221-4551-96bf-402049a9de2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fec7707d-f458-4532-9026-88e2c7098d0d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fec7707d-f458-4532-9026-88e2c7098d0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f16e4d33-aba0-4b22-9702-061cca6c90c3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f16e4d33-aba0-4b22-9702-061cca6c90c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "eefeaacb-5c79-4ff4-9b15-1aa9afe81a50"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "eefeaacb-5c79-4ff4-9b15-1aa9afe81a50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:38.793 09:58:28 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:38.793 09:58:28 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:38.793 09:58:28 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:38.793 09:58:28 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 67240 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@949 -- # '[' -z 67240 ']' 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # kill -0 67240 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # uname 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 67240 00:09:38.793 
killing process with pid 67240 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67240' 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # kill 67240 00:09:38.793 09:58:28 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # wait 67240 00:09:41.328 09:58:30 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:41.328 09:58:30 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:41.328 09:58:30 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:41.328 09:58:30 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:41.328 09:58:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:41.328 ************************************ 00:09:41.328 START TEST bdev_hello_world 00:09:41.328 ************************************ 00:09:41.328 09:58:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:41.328 [2024-06-10 09:58:30.356895] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:41.328 [2024-06-10 09:58:30.357105] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67878 ] 00:09:41.328 [2024-06-10 09:58:30.536370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.328 [2024-06-10 09:58:30.732598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.896 [2024-06-10 09:58:31.327456] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:41.896 [2024-06-10 09:58:31.327718] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:41.896 [2024-06-10 09:58:31.327756] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:41.896 [2024-06-10 09:58:31.330752] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:41.896 [2024-06-10 09:58:31.331276] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:41.896 [2024-06-10 09:58:31.331319] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:41.896 [2024-06-10 09:58:31.331490] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
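The hello-world pass above reduces to one example binary run against the generated bdev config; a minimal standalone invocation, with the paths and target bdev exactly as logged, is:
  # Open Nvme0n1p1 (the first SPDK GPT partition), write "Hello World!", read it back and exit,
  # matching the hello_bdev.c NOTICE lines above.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1p1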
00:09:41.896 00:09:41.896 [2024-06-10 09:58:31.331533] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:43.272 00:09:43.272 real 0m2.221s 00:09:43.272 user 0m1.880s 00:09:43.272 sys 0m0.231s 00:09:43.272 09:58:32 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:43.272 ************************************ 00:09:43.272 09:58:32 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:43.272 END TEST bdev_hello_world 00:09:43.272 ************************************ 00:09:43.272 09:58:32 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:43.272 09:58:32 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:43.272 09:58:32 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:43.272 09:58:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.272 ************************************ 00:09:43.272 START TEST bdev_bounds 00:09:43.272 ************************************ 00:09:43.272 09:58:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:09:43.272 Process bdevio pid: 67920 00:09:43.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.272 09:58:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=67920 00:09:43.272 09:58:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.272 09:58:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 67920' 00:09:43.272 09:58:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 67920 00:09:43.273 09:58:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:43.273 09:58:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 67920 ']' 00:09:43.273 09:58:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.273 09:58:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:43.273 09:58:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.273 09:58:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:43.273 09:58:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:43.273 [2024-06-10 09:58:32.609029] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:09:43.273 [2024-06-10 09:58:32.609200] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67920 ] 00:09:43.273 [2024-06-10 09:58:32.782401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.531 [2024-06-10 09:58:33.017203] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.531 [2024-06-10 09:58:33.017341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.531 [2024-06-10 09:58:33.017358] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.466 09:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:44.466 09:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:09:44.466 09:58:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:44.466 I/O targets: 00:09:44.466 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:44.466 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:44.466 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:44.466 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:44.466 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:44.466 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:44.466 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:44.466 00:09:44.466 00:09:44.466 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.466 http://cunit.sourceforge.net/ 00:09:44.466 00:09:44.466 00:09:44.466 Suite: bdevio tests on: Nvme3n1 00:09:44.466 Test: blockdev write read block ...passed 00:09:44.466 Test: blockdev write zeroes read block ...passed 00:09:44.466 Test: blockdev write zeroes read no split ...passed 00:09:44.466 Test: blockdev write zeroes read split ...passed 00:09:44.466 Test: blockdev write zeroes read split partial ...passed 00:09:44.466 Test: blockdev reset ...[2024-06-10 09:58:33.838329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:44.466 [2024-06-10 09:58:33.842114] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
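The bdevio pass now in progress was launched at blockdev.sh@289 and is driven over RPC at @294; stripped of the harness helpers, the sequence is roughly the following sketch (flags copied from the logged invocation; the socket-wait loop stands in for the harness's waitforlisten):
  # Start the bdevio app against the same bdev config; it serves RPC on /var/tmp/spdk.sock.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # Wait for the RPC socket to appear, then trigger the CUnit suites over RPC.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests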
00:09:44.466 passed 00:09:44.466 Test: blockdev write read 8 blocks ...passed 00:09:44.466 Test: blockdev write read size > 128k ...passed 00:09:44.466 Test: blockdev write read invalid size ...passed 00:09:44.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.466 Test: blockdev write read max offset ...passed 00:09:44.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.467 Test: blockdev writev readv 8 blocks ...passed 00:09:44.467 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.467 Test: blockdev writev readv block ...passed 00:09:44.467 Test: blockdev writev readv size > 128k ...passed 00:09:44.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.467 Test: blockdev comparev and writev ...[2024-06-10 09:58:33.854721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x272a0a000 len:0x1000 00:09:44.467 [2024-06-10 09:58:33.854791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:44.467 passed 00:09:44.467 Test: blockdev nvme passthru rw ...passed 00:09:44.467 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:58:33.855692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:44.467 passed 00:09:44.467 Test: blockdev nvme admin passthru ...[2024-06-10 09:58:33.855734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:44.467 passed 00:09:44.467 Test: blockdev copy ...passed 00:09:44.467 Suite: bdevio tests on: Nvme2n3 00:09:44.467 Test: blockdev write read block ...passed 00:09:44.467 Test: blockdev write zeroes read block ...passed 00:09:44.467 Test: blockdev write zeroes read no split ...passed 00:09:44.467 Test: blockdev write zeroes read split ...passed 00:09:44.467 Test: blockdev write zeroes read split partial ...passed 00:09:44.467 Test: blockdev reset ...[2024-06-10 09:58:33.924391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:44.467 [2024-06-10 09:58:33.928312] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:44.467 passed 00:09:44.467 Test: blockdev write read 8 blocks ...passed 00:09:44.467 Test: blockdev write read size > 128k ...passed 00:09:44.467 Test: blockdev write read invalid size ...passed 00:09:44.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.467 Test: blockdev write read max offset ...passed 00:09:44.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.467 Test: blockdev writev readv 8 blocks ...passed 00:09:44.467 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.467 Test: blockdev writev readv block ...passed 00:09:44.467 Test: blockdev writev readv size > 128k ...passed 00:09:44.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.467 Test: blockdev comparev and writev ...[2024-06-10 09:58:33.936090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x251904000 len:0x1000 00:09:44.467 [2024-06-10 09:58:33.936152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:44.467 passed 00:09:44.467 Test: blockdev nvme passthru rw ...passed 00:09:44.467 Test: blockdev nvme passthru vendor specific ...passed 00:09:44.467 Test: blockdev nvme admin passthru ...[2024-06-10 09:58:33.936989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:44.467 [2024-06-10 09:58:33.937032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:44.467 passed 00:09:44.467 Test: blockdev copy ...passed 00:09:44.467 Suite: bdevio tests on: Nvme2n2 00:09:44.467 Test: blockdev write read block ...passed 00:09:44.467 Test: blockdev write zeroes read block ...passed 00:09:44.467 Test: blockdev write zeroes read no split ...passed 00:09:44.467 Test: blockdev write zeroes read split ...passed 00:09:44.726 Test: blockdev write zeroes read split partial ...passed 00:09:44.726 Test: blockdev reset ...[2024-06-10 09:58:34.013744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:44.726 [2024-06-10 09:58:34.017599] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:44.726 passed 00:09:44.726 Test: blockdev write read 8 blocks ...passed 00:09:44.726 Test: blockdev write read size > 128k ...passed 00:09:44.726 Test: blockdev write read invalid size ...passed 00:09:44.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.726 Test: blockdev write read max offset ...passed 00:09:44.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.726 Test: blockdev writev readv 8 blocks ...passed 00:09:44.726 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.726 Test: blockdev writev readv block ...passed 00:09:44.726 Test: blockdev writev readv size > 128k ...passed 00:09:44.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.726 Test: blockdev comparev and writev ...[2024-06-10 09:58:34.026072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x251904000 len:0x1000 00:09:44.726 [2024-06-10 09:58:34.026133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:44.726 passed 00:09:44.726 Test: blockdev nvme passthru rw ...passed 00:09:44.726 Test: blockdev nvme passthru vendor specific ...passed 00:09:44.726 Test: blockdev nvme admin passthru ...[2024-06-10 09:58:34.026993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:44.726 [2024-06-10 09:58:34.027033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:44.726 passed 00:09:44.726 Test: blockdev copy ...passed 00:09:44.726 Suite: bdevio tests on: Nvme2n1 00:09:44.726 Test: blockdev write read block ...passed 00:09:44.726 Test: blockdev write zeroes read block ...passed 00:09:44.726 Test: blockdev write zeroes read no split ...passed 00:09:44.726 Test: blockdev write zeroes read split ...passed 00:09:44.726 Test: blockdev write zeroes read split partial ...passed 00:09:44.726 Test: blockdev reset ...[2024-06-10 09:58:34.100193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:44.726 [2024-06-10 09:58:34.103975] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:44.726 passed 00:09:44.726 Test: blockdev write read 8 blocks ...passed 00:09:44.726 Test: blockdev write read size > 128k ...passed 00:09:44.726 Test: blockdev write read invalid size ...passed 00:09:44.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.726 Test: blockdev write read max offset ...passed 00:09:44.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.726 Test: blockdev writev readv 8 blocks ...passed 00:09:44.726 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.726 Test: blockdev writev readv block ...passed 00:09:44.726 Test: blockdev writev readv size > 128k ...passed 00:09:44.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.726 Test: blockdev comparev and writev ...[2024-06-10 09:58:34.113384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28143c000 len:0x1000 00:09:44.726 [2024-06-10 09:58:34.113453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:44.726 passed 00:09:44.726 Test: blockdev nvme passthru rw ...passed 00:09:44.726 Test: blockdev nvme passthru vendor specific ...[2024-06-10 09:58:34.114308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:44.726 [2024-06-10 09:58:34.114351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:44.726 passed 00:09:44.726 Test: blockdev nvme admin passthru ...passed 00:09:44.726 Test: blockdev copy ...passed 00:09:44.726 Suite: bdevio tests on: Nvme1n1 00:09:44.726 Test: blockdev write read block ...passed 00:09:44.726 Test: blockdev write zeroes read block ...passed 00:09:44.726 Test: blockdev write zeroes read no split ...passed 00:09:44.726 Test: blockdev write zeroes read split ...passed 00:09:44.726 Test: blockdev write zeroes read split partial ...passed 00:09:44.726 Test: blockdev reset ...[2024-06-10 09:58:34.189238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:44.726 [2024-06-10 09:58:34.192776] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:44.726 passed 00:09:44.726 Test: blockdev write read 8 blocks ...passed 00:09:44.726 Test: blockdev write read size > 128k ...passed 00:09:44.726 Test: blockdev write read invalid size ...passed 00:09:44.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.726 Test: blockdev write read max offset ...passed 00:09:44.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.726 Test: blockdev writev readv 8 blocks ...passed 00:09:44.726 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.726 Test: blockdev writev readv block ...passed 00:09:44.726 Test: blockdev writev readv size > 128k ...passed 00:09:44.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.726 Test: blockdev comparev and writev ...[2024-06-10 09:58:34.201897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x281438000 len:0x1000 00:09:44.726 [2024-06-10 09:58:34.201956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:44.726 passed 00:09:44.726 Test: blockdev nvme passthru rw ...passed 00:09:44.726 Test: blockdev nvme passthru vendor specific ...passed 00:09:44.726 Test: blockdev nvme admin passthru ...[2024-06-10 09:58:34.202784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:44.726 [2024-06-10 09:58:34.202827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:44.726 passed 00:09:44.727 Test: blockdev copy ...passed 00:09:44.727 Suite: bdevio tests on: Nvme0n1p2 00:09:44.727 Test: blockdev write read block ...passed 00:09:44.727 Test: blockdev write zeroes read block ...passed 00:09:44.727 Test: blockdev write zeroes read no split ...passed 00:09:44.985 Test: blockdev write zeroes read split ...passed 00:09:44.985 Test: blockdev write zeroes read split partial ...passed 00:09:44.985 Test: blockdev reset ...[2024-06-10 09:58:34.279372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:44.985 [2024-06-10 09:58:34.283040] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:44.985 passed 00:09:44.985 Test: blockdev write read 8 blocks ...passed 00:09:44.985 Test: blockdev write read size > 128k ...passed 00:09:44.985 Test: blockdev write read invalid size ...passed 00:09:44.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.985 Test: blockdev write read max offset ...passed 00:09:44.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.985 Test: blockdev writev readv 8 blocks ...passed 00:09:44.985 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.985 Test: blockdev writev readv block ...passed 00:09:44.985 Test: blockdev writev readv size > 128k ...passed 00:09:44.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.985 Test: blockdev comparev and writev ...passed 00:09:44.985 Test: blockdev nvme passthru rw ...passed 00:09:44.985 Test: blockdev nvme passthru vendor specific ...passed 00:09:44.985 Test: blockdev nvme admin passthru ...passed 00:09:44.985 Test: blockdev copy ...[2024-06-10 09:58:34.291119] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:44.985 separate metadata which is not supported yet. 00:09:44.985 passed 00:09:44.985 Suite: bdevio tests on: Nvme0n1p1 00:09:44.985 Test: blockdev write read block ...passed 00:09:44.985 Test: blockdev write zeroes read block ...passed 00:09:44.985 Test: blockdev write zeroes read no split ...passed 00:09:44.985 Test: blockdev write zeroes read split ...passed 00:09:44.985 Test: blockdev write zeroes read split partial ...passed 00:09:44.985 Test: blockdev reset ...[2024-06-10 09:58:34.357006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:44.985 [2024-06-10 09:58:34.360533] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:44.985 passed 00:09:44.985 Test: blockdev write read 8 blocks ...passed 00:09:44.985 Test: blockdev write read size > 128k ...passed 00:09:44.985 Test: blockdev write read invalid size ...passed 00:09:44.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.985 Test: blockdev write read max offset ...passed 00:09:44.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.985 Test: blockdev writev readv 8 blocks ...passed 00:09:44.985 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.985 Test: blockdev writev readv block ...passed 00:09:44.985 Test: blockdev writev readv size > 128k ...passed 00:09:44.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.985 Test: blockdev comparev and writev ...passed 00:09:44.985 Test: blockdev nvme passthru rw ...passed 00:09:44.985 Test: blockdev nvme passthru vendor specific ...passed 00:09:44.985 Test: blockdev nvme admin passthru ...passed 00:09:44.986 Test: blockdev copy ...[2024-06-10 09:58:34.368982] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:44.986 separate metadata which is not supported yet. 
00:09:44.986 passed 00:09:44.986 00:09:44.986 Run Summary: Type Total Ran Passed Failed Inactive 00:09:44.986 suites 7 7 n/a 0 0 00:09:44.986 tests 161 161 161 0 0 00:09:44.986 asserts 1006 1006 1006 0 n/a 00:09:44.986 00:09:44.986 Elapsed time = 1.618 seconds 00:09:44.986 0 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 67920 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 67920 ']' 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 67920 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 67920 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67920' 00:09:44.986 killing process with pid 67920 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # kill 67920 00:09:44.986 09:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # wait 67920 00:09:45.921 09:58:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:45.921 00:09:45.921 real 0m2.865s 00:09:45.921 user 0m7.037s 00:09:45.921 sys 0m0.348s 00:09:45.921 09:58:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:45.921 09:58:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:45.921 ************************************ 00:09:45.921 END TEST bdev_bounds 00:09:45.921 ************************************ 00:09:45.921 09:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:45.921 09:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:09:45.921 09:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:45.921 09:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:45.921 ************************************ 00:09:45.921 START TEST bdev_nbd 00:09:45.921 ************************************ 00:09:45.921 09:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:45.921 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd 
-- bdev/blockdev.sh@304 -- # local bdev_all 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=67980 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 67980 /var/tmp/spdk-nbd.sock 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 67980 ']' 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:46.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:46.180 09:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:46.180 [2024-06-10 09:58:35.539335] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
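In the NBD phase below, each of the seven bdevs is attached to an NBD node through the dedicated spdk-nbd RPC socket and smoke-tested with a single direct-I/O read. Per device, the logged helpers boil down to this sketch (the scratch output path here is illustrative; the harness writes to test/bdev/nbdtest):
  # Export a bdev as an NBD block device via the bdev_svc app listening on the nbd socket.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
  # The RPC returns the allocated node (here /dev/nbd0); wait for it to appear,
  # then read one 4 KiB block through it, as the waitfornbd and dd checks below do.
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct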
00:09:46.180 [2024-06-10 09:58:35.539725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.438 [2024-06-10 09:58:35.714962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.438 [2024-06-10 09:58:35.944377] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.374 1+0 records in 00:09:47.374 1+0 records out 00:09:47.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578515 s, 7.1 MB/s 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:47.374 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.632 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:47.632 09:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:47.632 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:47.632 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:47.632 09:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.891 1+0 records in 00:09:47.891 1+0 records out 00:09:47.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427767 s, 9.6 MB/s 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:47.891 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:48.149 09:58:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:48.149 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.150 1+0 records in 00:09:48.150 1+0 records out 00:09:48.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682916 s, 6.0 MB/s 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:48.150 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.408 1+0 records in 00:09:48.408 1+0 records out 00:09:48.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674257 s, 6.1 MB/s 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:48.408 09:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd4 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd4 /proc/partitions 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.667 1+0 records in 00:09:48.667 1+0 records out 00:09:48.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706157 s, 5.8 MB/s 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:48.667 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd 
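Each attach in this loop is a single nbd_start_disk RPC against the dedicated /var/tmp/spdk-nbd.sock socket; when no /dev/nbdX argument is given, the RPC returns the node it picked and the script then waits on it with waitfornbd. Roughly, based on the traced nbd_common.sh calls (error handling omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)

    for bdev in "${bdev_list[@]}"; do
        # with no explicit /dev/nbdX the RPC picks the next free node and prints it,
        # e.g. /dev/nbd3, which is what waitfornbd then polls for
        nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
        waitfornbd "$(basename "$nbd_device")"
    done
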
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd5 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd5 /proc/partitions 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.235 1+0 records in 00:09:49.235 1+0 records out 00:09:49.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692515 s, 5.9 MB/s 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:49.235 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd6 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd6 /proc/partitions 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.493 1+0 records in 00:09:49.493 1+0 records out 00:09:49.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723163 s, 5.7 MB/s 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:49.493 09:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd0", 00:09:49.750 "bdev_name": "Nvme0n1p1" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd1", 00:09:49.750 "bdev_name": "Nvme0n1p2" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd2", 00:09:49.750 "bdev_name": "Nvme1n1" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd3", 00:09:49.750 "bdev_name": "Nvme2n1" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd4", 00:09:49.750 "bdev_name": "Nvme2n2" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd5", 00:09:49.750 "bdev_name": "Nvme2n3" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd6", 00:09:49.750 "bdev_name": "Nvme3n1" 00:09:49.750 } 00:09:49.750 ]' 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd0", 00:09:49.750 "bdev_name": "Nvme0n1p1" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd1", 00:09:49.750 "bdev_name": "Nvme0n1p2" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd2", 00:09:49.750 "bdev_name": "Nvme1n1" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd3", 00:09:49.750 "bdev_name": "Nvme2n1" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd4", 00:09:49.750 "bdev_name": "Nvme2n2" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd5", 00:09:49.750 "bdev_name": "Nvme2n3" 00:09:49.750 }, 00:09:49.750 { 00:09:49.750 "nbd_device": "/dev/nbd6", 00:09:49.750 "bdev_name": "Nvme3n1" 00:09:49.750 } 00:09:49.750 ]' 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.750 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.007 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.264 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.523 09:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.782 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:51.349 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:51.349 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:51.349 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:51.349 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.349 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.350 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:51.350 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.350 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.350 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.350 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.608 09:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:51.867 09:58:41 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.867 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:52.125 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:52.383 /dev/nbd0 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.383 1+0 records in 00:09:52.383 1+0 records out 00:09:52.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503581 s, 8.1 MB/s 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:52.383 09:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:52.642 /dev/nbd1 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.642 1+0 records in 00:09:52.642 1+0 records out 00:09:52.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670806 s, 6.1 MB/s 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:52.642 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:52.900 /dev/nbd10 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.158 1+0 records in 00:09:53.158 1+0 records out 00:09:53.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058843 s, 7.0 MB/s 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 
0 ']' 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:53.158 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:53.417 /dev/nbd11 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.417 1+0 records in 00:09:53.417 1+0 records out 00:09:53.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741453 s, 5.5 MB/s 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:53.417 09:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:53.675 /dev/nbd12 00:09:53.675 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:53.675 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd12 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd12 /proc/partitions 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd 
-- common/autotest_common.sh@872 -- # break 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.676 1+0 records in 00:09:53.676 1+0 records out 00:09:53.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881148 s, 4.6 MB/s 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:53.676 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:53.934 /dev/nbd13 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd13 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd13 /proc/partitions 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.934 1+0 records in 00:09:53.934 1+0 records out 00:09:53.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637722 s, 6.4 MB/s 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.934 09:58:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:53.934 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:54.192 /dev/nbd14 00:09:54.192 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:54.192 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:54.192 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd14 00:09:54.192 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:09:54.192 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:54.192 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd14 /proc/partitions 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.193 1+0 records in 00:09:54.193 1+0 records out 00:09:54.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00077191 s, 5.3 MB/s 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.193 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.451 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:54.451 { 00:09:54.451 "nbd_device": "/dev/nbd0", 00:09:54.451 "bdev_name": "Nvme0n1p1" 00:09:54.451 }, 00:09:54.451 { 00:09:54.451 "nbd_device": "/dev/nbd1", 00:09:54.451 "bdev_name": "Nvme0n1p2" 00:09:54.451 }, 00:09:54.451 { 00:09:54.451 "nbd_device": "/dev/nbd10", 00:09:54.451 "bdev_name": "Nvme1n1" 00:09:54.451 }, 00:09:54.451 { 00:09:54.451 "nbd_device": "/dev/nbd11", 00:09:54.451 "bdev_name": "Nvme2n1" 00:09:54.451 }, 00:09:54.451 { 00:09:54.451 "nbd_device": "/dev/nbd12", 00:09:54.451 "bdev_name": "Nvme2n2" 00:09:54.451 }, 00:09:54.451 { 00:09:54.451 "nbd_device": "/dev/nbd13", 00:09:54.451 "bdev_name": "Nvme2n3" 00:09:54.451 }, 00:09:54.451 { 00:09:54.451 "nbd_device": 
"/dev/nbd14", 00:09:54.451 "bdev_name": "Nvme3n1" 00:09:54.451 } 00:09:54.451 ]' 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:54.452 { 00:09:54.452 "nbd_device": "/dev/nbd0", 00:09:54.452 "bdev_name": "Nvme0n1p1" 00:09:54.452 }, 00:09:54.452 { 00:09:54.452 "nbd_device": "/dev/nbd1", 00:09:54.452 "bdev_name": "Nvme0n1p2" 00:09:54.452 }, 00:09:54.452 { 00:09:54.452 "nbd_device": "/dev/nbd10", 00:09:54.452 "bdev_name": "Nvme1n1" 00:09:54.452 }, 00:09:54.452 { 00:09:54.452 "nbd_device": "/dev/nbd11", 00:09:54.452 "bdev_name": "Nvme2n1" 00:09:54.452 }, 00:09:54.452 { 00:09:54.452 "nbd_device": "/dev/nbd12", 00:09:54.452 "bdev_name": "Nvme2n2" 00:09:54.452 }, 00:09:54.452 { 00:09:54.452 "nbd_device": "/dev/nbd13", 00:09:54.452 "bdev_name": "Nvme2n3" 00:09:54.452 }, 00:09:54.452 { 00:09:54.452 "nbd_device": "/dev/nbd14", 00:09:54.452 "bdev_name": "Nvme3n1" 00:09:54.452 } 00:09:54.452 ]' 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:54.452 /dev/nbd1 00:09:54.452 /dev/nbd10 00:09:54.452 /dev/nbd11 00:09:54.452 /dev/nbd12 00:09:54.452 /dev/nbd13 00:09:54.452 /dev/nbd14' 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:54.452 /dev/nbd1 00:09:54.452 /dev/nbd10 00:09:54.452 /dev/nbd11 00:09:54.452 /dev/nbd12 00:09:54.452 /dev/nbd13 00:09:54.452 /dev/nbd14' 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:54.452 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:54.711 256+0 records in 00:09:54.711 256+0 records out 00:09:54.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0079368 s, 132 MB/s 00:09:54.711 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.711 09:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:54.711 256+0 records in 00:09:54.711 256+0 records out 00:09:54.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170333 s, 6.2 MB/s 00:09:54.711 09:58:44 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.711 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:54.969 256+0 records in 00:09:54.969 256+0 records out 00:09:54.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164982 s, 6.4 MB/s 00:09:54.969 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.969 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:54.969 256+0 records in 00:09:54.969 256+0 records out 00:09:54.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156925 s, 6.7 MB/s 00:09:54.969 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.969 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:55.227 256+0 records in 00:09:55.227 256+0 records out 00:09:55.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165371 s, 6.3 MB/s 00:09:55.227 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.227 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:55.485 256+0 records in 00:09:55.485 256+0 records out 00:09:55.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165248 s, 6.3 MB/s 00:09:55.485 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.485 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:55.485 256+0 records in 00:09:55.485 256+0 records out 00:09:55.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153124 s, 6.8 MB/s 00:09:55.485 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.485 09:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:55.744 256+0 records in 00:09:55.744 256+0 records out 00:09:55.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167269 s, 6.3 MB/s 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.744 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.003 
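The data pass traced here (nbd_dd_data_verify) is a plain write-then-compare: 1 MiB of /dev/urandom goes into a temp file, that file is dd'd onto every export with O_DIRECT, and each device is then compared byte-for-byte against the same file with cmp. Approximately:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 256 x 4 KiB = 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                          # fails loudly on any mismatch
    done
    rm "$tmp_file"
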
09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.003 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.262 09:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.520 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:57.088 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:57.088 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:57.088 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:57.088 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.088 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.088 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:57.089 09:58:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.089 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:57.347 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.347 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.347 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.347 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.606 09:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.864 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:58.123 
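Teardown mirrors the attach path: each export is removed with nbd_stop_disk and waitfornbd_exit polls /proc/partitions until the node is gone, after which nbd_get_disks must report an empty list. A sketch, with the retry interval again an assumption (the trace only shows the grep/break pair):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }

    for dev in "${nbd_list[@]}"; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        waitfornbd_exit "$(basename "$dev")"
    done

    # afterwards the socket should report no exports at all
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]
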
09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:58.123 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:58.381 malloc_lvol_verify 00:09:58.381 09:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:58.641 07c29de4-c2eb-4629-ad94-989f39db11e0 00:09:58.641 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:58.899 26163819-b1b7-4ada-9b07-b1143c5e7a21 00:09:58.899 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:59.159 /dev/nbd0 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:59.159 mke2fs 1.46.5 (30-Dec-2021) 00:09:59.159 Discarding device blocks: 0/4096 done 00:09:59.159 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:59.159 00:09:59.159 Allocating group tables: 0/1 done 00:09:59.159 Writing inode tables: 0/1 done 00:09:59.159 Creating journal (1024 blocks): done 00:09:59.159 Writing superblocks and filesystem accounting information: 0/1 done 00:09:59.159 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:59.159 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.159 
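The final functional check above (nbd_with_lvol_verify) builds a small logical volume stack inside the same target and proves it is usable end to end: a 16 MiB malloc bdev with 512-byte blocks hosts an lvstore, a 4 MiB lvol from that store is exported as /dev/nbd0, and mkfs.ext4 has to succeed on it before the export is torn down. In outline, from the traced RPCs:

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol on that store

    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                                                 # the 4096 x 1k-block fs seen above
    mkfs_ret=$?

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    waitfornbd_exit nbd0
    [ "$mkfs_ret" -eq 0 ]
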
09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 67980 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 67980 ']' 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 67980 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 67980 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:59.418 killing process with pid 67980 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67980' 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # kill 67980 00:09:59.418 09:58:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # wait 67980 00:10:00.794 09:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:10:00.794 00:10:00.794 real 0m14.702s 00:10:00.794 user 0m20.845s 00:10:00.794 sys 0m4.802s 00:10:00.794 09:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:00.794 09:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:00.794 ************************************ 00:10:00.794 END TEST bdev_nbd 00:10:00.794 ************************************ 00:10:00.794 09:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:10:00.794 09:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:10:00.794 09:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:10:00.794 skipping fio tests on NVMe due to multi-ns failures. 00:10:00.795 09:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
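The nbd_with_lvol_verify step that closes the NBD suite above reduces to a short RPC sequence. A minimal sketch assembled from the xtrace (socket path, bdev names, and sizes are the ones the test passed; size units are assumed to be MiB, and the 0.1 s poll interval is an assumption since the trace only shows the grep/break steps):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# 16 MiB malloc bdev with 512-byte blocks to back the logical volume store
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
# 4 MiB logical volume named "lvol" inside lvstore "lvs"
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
# export the lvol over NBD, format it, then tear the NBD device down again
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
# wait (up to 20 polls) for the kernel to drop nbd0 from /proc/partitions,
# mirroring the waitfornbd_exit loop in nbd_common.sh
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions || break
    sleep 0.1
done

If mkfs.ext4 returns 0 and the device disappears cleanly, the suite returns 0, which is what the END TEST bdev_nbd banner above reports.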
00:10:00.795 09:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:00.795 09:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:00.795 09:58:50 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:10:00.795 09:58:50 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:00.795 09:58:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:00.795 ************************************ 00:10:00.795 START TEST bdev_verify 00:10:00.795 ************************************ 00:10:00.795 09:58:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:00.795 [2024-06-10 09:58:50.293298] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:10:00.795 [2024-06-10 09:58:50.293493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68432 ] 00:10:01.053 [2024-06-10 09:58:50.470545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.312 [2024-06-10 09:58:50.698147] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.312 [2024-06-10 09:58:50.698151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.880 Running I/O for 5 seconds... 00:10:07.163 00:10:07.163 Latency(us) 00:10:07.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.163 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x0 length 0x5e800 00:10:07.163 Nvme0n1p1 : 5.09 1357.05 5.30 0.00 0.00 94122.24 15847.80 87699.08 00:10:07.163 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x5e800 length 0x5e800 00:10:07.163 Nvme0n1p1 : 5.08 1285.95 5.02 0.00 0.00 99307.73 18945.86 93895.21 00:10:07.163 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x0 length 0x5e7ff 00:10:07.163 Nvme0n1p2 : 5.10 1356.51 5.30 0.00 0.00 93961.55 15609.48 82932.83 00:10:07.163 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:07.163 Nvme0n1p2 : 5.08 1285.42 5.02 0.00 0.00 99144.68 19303.33 91035.46 00:10:07.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x0 length 0xa0000 00:10:07.163 Nvme1n1 : 5.10 1356.02 5.30 0.00 0.00 93781.41 15966.95 80073.08 00:10:07.163 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0xa0000 length 0xa0000 00:10:07.163 Nvme1n1 : 5.08 1284.98 5.02 0.00 0.00 98953.36 19899.11 87222.46 00:10:07.163 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x0 length 0x80000 00:10:07.163 Nvme2n1 : 5.10 1355.55 5.30 0.00 0.00 93602.86 16205.27 77213.32 00:10:07.163 Job: Nvme2n1 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x80000 length 0x80000 00:10:07.163 Nvme2n1 : 5.08 1284.58 5.02 0.00 0.00 98811.19 19184.17 84362.71 00:10:07.163 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x0 length 0x80000 00:10:07.163 Nvme2n2 : 5.10 1355.07 5.29 0.00 0.00 93440.68 16324.42 79596.45 00:10:07.163 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x80000 length 0x80000 00:10:07.163 Nvme2n2 : 5.08 1284.18 5.02 0.00 0.00 98643.43 19422.49 86269.21 00:10:07.163 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x0 length 0x80000 00:10:07.163 Nvme2n3 : 5.10 1354.59 5.29 0.00 0.00 93302.48 16562.73 82456.20 00:10:07.163 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x80000 length 0x80000 00:10:07.163 Nvme2n3 : 5.09 1283.77 5.01 0.00 0.00 98482.21 18588.39 90082.21 00:10:07.163 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x0 length 0x20000 00:10:07.163 Nvme3n1 : 5.10 1354.11 5.29 0.00 0.00 93153.40 14477.50 83886.08 00:10:07.163 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.163 Verification LBA range: start 0x20000 length 0x20000 00:10:07.163 Nvme3n1 : 5.09 1283.37 5.01 0.00 0.00 98331.19 15609.48 93895.21 00:10:07.163 =================================================================================================================== 00:10:07.163 Total : 18481.16 72.19 0.00 0.00 96142.93 14477.50 93895.21 00:10:08.539 00:10:08.539 real 0m7.749s 00:10:08.539 user 0m14.122s 00:10:08.539 sys 0m0.241s 00:10:08.539 09:58:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:08.539 09:58:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:08.539 ************************************ 00:10:08.539 END TEST bdev_verify 00:10:08.539 ************************************ 00:10:08.539 09:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:08.539 09:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:10:08.539 09:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:08.539 09:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.539 ************************************ 00:10:08.539 START TEST bdev_verify_big_io 00:10:08.539 ************************************ 00:10:08.539 09:58:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:08.798 [2024-06-10 09:58:58.089942] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
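Stripped of the run_test plumbing, the verify pass that just finished (and the big-I/O variant starting next, which only raises -o to 65536) is a single bdevperf invocation. A sketch with the flags copied from the trace; the trailing empty argument the wrapper appends is omitted:

# -q: queue depth, -o: I/O size in bytes, -w: workload type, -t: seconds to run,
# -m: reactor core mask (0x3 = cores 0 and 1); -C is passed through from blockdev.sh as-is.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3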
00:10:08.798 [2024-06-10 09:58:58.090183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68536 ] 00:10:08.798 [2024-06-10 09:58:58.262173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:09.056 [2024-06-10 09:58:58.491721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.056 [2024-06-10 09:58:58.491729] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.992 Running I/O for 5 seconds... 00:10:16.591 00:10:16.591 Latency(us) 00:10:16.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.591 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x0 length 0x5e80 00:10:16.591 Nvme0n1p1 : 5.77 116.54 7.28 0.00 0.00 1054864.60 18826.71 1143901.09 00:10:16.591 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x5e80 length 0x5e80 00:10:16.591 Nvme0n1p1 : 5.88 100.62 6.29 0.00 0.00 1205150.28 15252.01 1738729.66 00:10:16.591 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x0 length 0x5e7f 00:10:16.591 Nvme0n1p2 : 5.91 118.27 7.39 0.00 0.00 1017623.30 59578.18 1044763.00 00:10:16.591 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:16.591 Nvme0n1p2 : 5.78 102.47 6.40 0.00 0.00 1165694.63 30742.34 1761607.68 00:10:16.591 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x0 length 0xa000 00:10:16.591 Nvme1n1 : 5.86 95.45 5.97 0.00 0.00 1229499.70 84839.33 1998013.91 00:10:16.591 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0xa000 length 0xa000 00:10:16.591 Nvme1n1 : 5.89 104.83 6.55 0.00 0.00 1105277.43 50998.92 1792111.71 00:10:16.591 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x0 length 0x8000 00:10:16.591 Nvme2n1 : 5.91 120.26 7.52 0.00 0.00 951709.08 85315.96 1021884.97 00:10:16.591 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x8000 length 0x8000 00:10:16.591 Nvme2n1 : 5.95 115.65 7.23 0.00 0.00 982118.62 59578.18 1265917.21 00:10:16.591 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x0 length 0x8000 00:10:16.591 Nvme2n2 : 5.95 128.97 8.06 0.00 0.00 865185.98 39321.60 968502.92 00:10:16.591 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x8000 length 0x8000 00:10:16.591 Nvme2n2 : 5.99 119.86 7.49 0.00 0.00 921083.50 37891.72 1296421.24 00:10:16.591 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x0 length 0x8000 00:10:16.591 Nvme2n3 : 6.00 132.56 8.29 0.00 0.00 816160.39 30504.03 999006.95 00:10:16.591 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x8000 length 0x8000 00:10:16.591 Nvme2n3 : 6.03 119.20 7.45 0.00 0.00 896712.13 38606.66 1921753.83 
00:10:16.591 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x0 length 0x2000 00:10:16.591 Nvme3n1 : 6.04 148.24 9.27 0.00 0.00 712372.50 6374.87 1021884.97 00:10:16.591 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:16.591 Verification LBA range: start 0x2000 length 0x2000 00:10:16.591 Nvme3n1 : 6.08 139.39 8.71 0.00 0.00 748429.48 755.90 1937005.85 00:10:16.591 =================================================================================================================== 00:10:16.591 Total : 1662.33 103.90 0.00 0.00 955831.76 755.90 1998013.91 00:10:17.970 00:10:17.970 real 0m9.118s 00:10:17.970 user 0m16.810s 00:10:17.970 sys 0m0.280s 00:10:17.970 09:59:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:17.970 ************************************ 00:10:17.970 END TEST bdev_verify_big_io 00:10:17.970 09:59:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:17.970 ************************************ 00:10:17.970 09:59:07 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.970 09:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:10:17.970 09:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:17.970 09:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:17.970 ************************************ 00:10:17.970 START TEST bdev_write_zeroes 00:10:17.970 ************************************ 00:10:17.970 09:59:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.970 [2024-06-10 09:59:07.255916] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:10:17.970 [2024-06-10 09:59:07.256077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68651 ] 00:10:17.970 [2024-06-10 09:59:07.428975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.229 [2024-06-10 09:59:07.656081] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.796 Running I/O for 1 seconds... 
00:10:20.172 00:10:20.172 Latency(us) 00:10:20.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.172 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.172 Nvme0n1p1 : 1.02 5150.51 20.12 0.00 0.00 24787.49 9472.93 50998.92 00:10:20.172 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.172 Nvme0n1p2 : 1.02 5143.67 20.09 0.00 0.00 24773.85 9592.09 50283.99 00:10:20.172 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.172 Nvme1n1 : 1.02 5137.30 20.07 0.00 0.00 24724.74 10009.13 49330.73 00:10:20.172 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.172 Nvme2n1 : 1.02 5131.01 20.04 0.00 0.00 24644.26 10426.18 48615.80 00:10:20.172 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.172 Nvme2n2 : 1.02 5124.80 20.02 0.00 0.00 24621.96 10426.18 49569.05 00:10:20.172 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.172 Nvme2n3 : 1.03 5118.43 19.99 0.00 0.00 24603.65 10247.45 49569.05 00:10:20.172 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.172 Nvme3n1 : 1.03 5111.94 19.97 0.00 0.00 24586.05 10009.13 49569.05 00:10:20.172 =================================================================================================================== 00:10:20.172 Total : 35917.66 140.30 0.00 0.00 24677.43 9472.93 50998.92 00:10:21.106 00:10:21.106 real 0m3.328s 00:10:21.106 user 0m2.969s 00:10:21.106 sys 0m0.237s 00:10:21.106 09:59:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:21.106 ************************************ 00:10:21.106 END TEST bdev_write_zeroes 00:10:21.106 ************************************ 00:10:21.106 09:59:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:21.106 09:59:10 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:21.106 09:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:10:21.106 09:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:21.106 09:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:21.106 ************************************ 00:10:21.106 START TEST bdev_json_nonenclosed 00:10:21.106 ************************************ 00:10:21.106 09:59:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:21.365 [2024-06-10 09:59:10.643111] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
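Each of these suites goes through the same run_test wrapper from autotest_common.sh, which is what produces the START/END banners and the real/user/sys timings seen above. A simplified stand-in for that wrapper, not the actual helper (the real one also handles the xtrace and timing bookkeeping visible as xtrace_disable in the traces, and the '[' N -le 1 ']' checks correspond to its argument guard):

run_test() {
    local test_name=$1
    shift
    # guard against being called without a command to run
    if [ "$#" -le 0 ]; then
        echo "usage: run_test <name> <command> [args...]" >&2
        return 1
    fi
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}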
00:10:21.365 [2024-06-10 09:59:10.643285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68708 ] 00:10:21.365 [2024-06-10 09:59:10.820863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.623 [2024-06-10 09:59:11.009515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.623 [2024-06-10 09:59:11.009630] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:21.623 [2024-06-10 09:59:11.009674] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:21.623 [2024-06-10 09:59:11.009690] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:22.191 00:10:22.191 real 0m0.924s 00:10:22.191 user 0m0.677s 00:10:22.191 sys 0m0.140s 00:10:22.191 09:59:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:22.191 09:59:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:22.191 ************************************ 00:10:22.191 END TEST bdev_json_nonenclosed 00:10:22.191 ************************************ 00:10:22.191 09:59:11 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:22.191 09:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:10:22.191 09:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:22.191 09:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:22.191 ************************************ 00:10:22.191 START TEST bdev_json_nonarray 00:10:22.191 ************************************ 00:10:22.191 09:59:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:22.191 [2024-06-10 09:59:11.636385] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:10:22.191 [2024-06-10 09:59:11.636611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68735 ] 00:10:22.449 [2024-06-10 09:59:11.822140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.708 [2024-06-10 09:59:12.044829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.708 [2024-06-10 09:59:12.044944] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:22.708 [2024-06-10 09:59:12.044973] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:22.708 [2024-06-10 09:59:12.044988] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:22.966 00:10:22.966 real 0m0.946s 00:10:22.966 user 0m0.699s 00:10:22.966 sys 0m0.139s 00:10:22.966 09:59:12 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:22.966 ************************************ 00:10:22.966 END TEST bdev_json_nonarray 00:10:22.966 ************************************ 00:10:22.966 09:59:12 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:23.225 09:59:12 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:10:23.225 09:59:12 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:10:23.225 09:59:12 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:23.225 09:59:12 blockdev_nvme_gpt -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:23.225 09:59:12 blockdev_nvme_gpt -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:23.225 09:59:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:23.225 ************************************ 00:10:23.225 START TEST bdev_gpt_uuid 00:10:23.225 ************************************ 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # bdev_gpt_uuid 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68766 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 68766 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@830 -- # '[' -z 68766 ']' 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:23.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:23.225 09:59:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:23.225 [2024-06-10 09:59:12.639789] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
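The two negative tests that just completed feed bdevperf configs that json_config_prepare_ctx rejects: nonenclosed.json is not wrapped in {} and nonarray.json has a 'subsystems' value that is not an array, so in both cases the app stops with a non-zero status as logged. For contrast, a minimal config of the shape the loader accepts might look like the following sketch (the bdev config list is intentionally left empty; file name is illustrative):

cat > /tmp/minimal_bdev_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF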
00:10:23.225 [2024-06-10 09:59:12.639937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68766 ] 00:10:23.483 [2024-06-10 09:59:12.805753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.740 [2024-06-10 09:59:13.030402] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.303 09:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:24.303 09:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@863 -- # return 0 00:10:24.303 09:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:24.303 09:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.303 09:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:24.561 Some configs were skipped because the RPC state that can call them passed over. 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.561 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:10:24.820 { 00:10:24.820 "name": "Nvme0n1p1", 00:10:24.820 "aliases": [ 00:10:24.820 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:24.820 ], 00:10:24.820 "product_name": "GPT Disk", 00:10:24.820 "block_size": 4096, 00:10:24.820 "num_blocks": 774144, 00:10:24.820 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:24.820 "md_size": 64, 00:10:24.820 "md_interleave": false, 00:10:24.820 "dif_type": 0, 00:10:24.820 "assigned_rate_limits": { 00:10:24.820 "rw_ios_per_sec": 0, 00:10:24.820 "rw_mbytes_per_sec": 0, 00:10:24.820 "r_mbytes_per_sec": 0, 00:10:24.820 "w_mbytes_per_sec": 0 00:10:24.820 }, 00:10:24.820 "claimed": false, 00:10:24.820 "zoned": false, 00:10:24.820 "supported_io_types": { 00:10:24.820 "read": true, 00:10:24.820 "write": true, 00:10:24.820 "unmap": true, 00:10:24.820 "write_zeroes": true, 00:10:24.820 "flush": true, 00:10:24.820 "reset": true, 00:10:24.820 "compare": true, 00:10:24.820 "compare_and_write": false, 00:10:24.820 "abort": true, 00:10:24.820 "nvme_admin": false, 00:10:24.820 "nvme_io": false 00:10:24.820 }, 00:10:24.820 "driver_specific": { 00:10:24.820 "gpt": { 00:10:24.820 "base_bdev": "Nvme0n1", 00:10:24.820 "offset_blocks": 256, 00:10:24.820 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:24.820 "unique_partition_guid": 
"6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:24.820 "partition_name": "SPDK_TEST_first" 00:10:24.820 } 00:10:24.820 } 00:10:24.820 } 00:10:24.820 ]' 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:10:24.820 { 00:10:24.820 "name": "Nvme0n1p2", 00:10:24.820 "aliases": [ 00:10:24.820 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:24.820 ], 00:10:24.820 "product_name": "GPT Disk", 00:10:24.820 "block_size": 4096, 00:10:24.820 "num_blocks": 774143, 00:10:24.820 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:24.820 "md_size": 64, 00:10:24.820 "md_interleave": false, 00:10:24.820 "dif_type": 0, 00:10:24.820 "assigned_rate_limits": { 00:10:24.820 "rw_ios_per_sec": 0, 00:10:24.820 "rw_mbytes_per_sec": 0, 00:10:24.820 "r_mbytes_per_sec": 0, 00:10:24.820 "w_mbytes_per_sec": 0 00:10:24.820 }, 00:10:24.820 "claimed": false, 00:10:24.820 "zoned": false, 00:10:24.820 "supported_io_types": { 00:10:24.820 "read": true, 00:10:24.820 "write": true, 00:10:24.820 "unmap": true, 00:10:24.820 "write_zeroes": true, 00:10:24.820 "flush": true, 00:10:24.820 "reset": true, 00:10:24.820 "compare": true, 00:10:24.820 "compare_and_write": false, 00:10:24.820 "abort": true, 00:10:24.820 "nvme_admin": false, 00:10:24.820 "nvme_io": false 00:10:24.820 }, 00:10:24.820 "driver_specific": { 00:10:24.820 "gpt": { 00:10:24.820 "base_bdev": "Nvme0n1", 00:10:24.820 "offset_blocks": 774400, 00:10:24.820 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:24.820 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:24.820 "partition_name": "SPDK_TEST_second" 00:10:24.820 } 00:10:24.820 } 00:10:24.820 } 00:10:24.820 ]' 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:10:24.820 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- 
bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 68766 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@949 -- # '[' -z 68766 ']' 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # kill -0 68766 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # uname 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 68766 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 68766' 00:10:25.078 killing process with pid 68766 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # kill 68766 00:10:25.078 09:59:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # wait 68766 00:10:27.608 00:10:27.608 real 0m4.000s 00:10:27.608 user 0m4.349s 00:10:27.608 sys 0m0.432s 00:10:27.608 09:59:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:27.608 09:59:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:27.608 ************************************ 00:10:27.608 END TEST bdev_gpt_uuid 00:10:27.608 ************************************ 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:27.608 09:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:27.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.608 Waiting for block devices as requested 00:10:27.608 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.866 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.866 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.866 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:33.139 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:33.139 09:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:10:33.139 09:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- 
# wipefs --all /dev/nvme1n1 00:10:33.397 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:33.397 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:33.397 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:33.397 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:10:33.397 09:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:33.397 00:10:33.397 real 1m5.740s 00:10:33.397 user 1m24.037s 00:10:33.397 sys 0m9.811s 00:10:33.397 09:59:22 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:33.397 09:59:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.397 ************************************ 00:10:33.397 END TEST blockdev_nvme_gpt 00:10:33.397 ************************************ 00:10:33.397 09:59:22 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:33.397 09:59:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:33.397 09:59:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:33.397 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:10:33.397 ************************************ 00:10:33.397 START TEST nvme 00:10:33.397 ************************************ 00:10:33.397 09:59:22 nvme -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:33.397 * Looking for test storage... 00:10:33.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:33.397 09:59:22 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:33.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.528 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.528 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.528 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.528 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.528 09:59:24 nvme -- nvme/nvme.sh@79 -- # uname 00:10:34.528 09:59:24 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:34.528 09:59:24 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:34.528 09:59:24 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1081 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1067 -- # _randomize_va_space=2 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1068 -- # echo 0 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1070 -- # stubpid=69406 00:10:34.528 Waiting for stub to ready for secondary processes... 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1071 -- # echo Waiting for stub to ready for secondary processes... 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1069 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1072 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1074 -- # [[ -e /proc/69406 ]] 00:10:34.528 09:59:24 nvme -- common/autotest_common.sh@1075 -- # sleep 1s 00:10:34.786 [2024-06-10 09:59:24.075359] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:10:34.786 [2024-06-10 09:59:24.075553] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:35.354 [2024-06-10 09:59:24.844495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:35.612 [2024-06-10 09:59:25.023318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.612 [2024-06-10 09:59:25.023458] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.612 [2024-06-10 09:59:25.023484] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.612 09:59:25 nvme -- common/autotest_common.sh@1072 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:35.612 09:59:25 nvme -- common/autotest_common.sh@1074 -- # [[ -e /proc/69406 ]] 00:10:35.612 09:59:25 nvme -- common/autotest_common.sh@1075 -- # sleep 1s 00:10:35.612 [2024-06-10 09:59:25.041399] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:35.612 [2024-06-10 09:59:25.041693] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:35.612 [2024-06-10 09:59:25.052934] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:35.612 [2024-06-10 09:59:25.053181] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:35.612 [2024-06-10 09:59:25.057479] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:35.612 [2024-06-10 09:59:25.058793] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:35.612 [2024-06-10 09:59:25.058956] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:35.612 [2024-06-10 09:59:25.063341] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:35.612 [2024-06-10 09:59:25.063900] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:35.612 [2024-06-10 09:59:25.064065] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:35.612 [2024-06-10 09:59:25.068293] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:35.612 [2024-06-10 09:59:25.068519] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:35.612 [2024-06-10 09:59:25.068610] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:35.612 [2024-06-10 09:59:25.068698] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:35.612 [2024-06-10 09:59:25.068753] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:36.547 done. 00:10:36.547 09:59:26 nvme -- common/autotest_common.sh@1072 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:36.547 09:59:26 nvme -- common/autotest_common.sh@1077 -- # echo done. 
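Before any of the nvme suites run, the harness starts the multi-process stub and blocks until it is ready; that is the sleep-1s loop traced above. A minimal sketch of the same wait (binary path and flags are the ones in the trace; the real helper also zeroes randomize_va_space beforehand and arranges cleanup via the kill_stub trap):

/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
echo "Waiting for stub to ready for secondary processes..."
# ready once the stub creates /var/run/spdk_stub0; stop waiting if the stub dies
while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
    sleep 1s
done
echo done.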
00:10:36.547 09:59:26 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:36.547 09:59:26 nvme -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:10:36.547 09:59:26 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:36.547 09:59:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:36.547 ************************************ 00:10:36.547 START TEST nvme_reset 00:10:36.547 ************************************ 00:10:36.547 09:59:26 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:36.806 Initializing NVMe Controllers 00:10:36.806 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:36.806 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:36.806 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:36.806 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:36.806 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:36.806 00:10:36.806 real 0m0.273s 00:10:36.806 user 0m0.099s 00:10:36.806 sys 0m0.125s 00:10:36.806 09:59:26 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:36.806 09:59:26 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:36.806 ************************************ 00:10:36.806 END TEST nvme_reset 00:10:36.806 ************************************ 00:10:37.064 09:59:26 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:37.064 09:59:26 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:37.064 09:59:26 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:37.064 09:59:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:37.064 ************************************ 00:10:37.064 START TEST nvme_identify 00:10:37.064 ************************************ 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # nvme_identify 00:10:37.064 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:37.064 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:37.064 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:37.064 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # bdfs=() 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # local bdfs 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # (( 4 == 0 )) 00:10:37.064 09:59:26 nvme.nvme_identify -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:37.064 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:37.324 [2024-06-10 09:59:26.648193] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69439 terminated unexpected 00:10:37.324 ===================================================== 00:10:37.324 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:37.324 
===================================================== 00:10:37.324 Controller Capabilities/Features 00:10:37.324 ================================ 00:10:37.324 Vendor ID: 1b36 00:10:37.324 Subsystem Vendor ID: 1af4 00:10:37.324 Serial Number: 12340 00:10:37.324 Model Number: QEMU NVMe Ctrl 00:10:37.324 Firmware Version: 8.0.0 00:10:37.324 Recommended Arb Burst: 6 00:10:37.324 IEEE OUI Identifier: 00 54 52 00:10:37.324 Multi-path I/O 00:10:37.324 May have multiple subsystem ports: No 00:10:37.324 May have multiple controllers: No 00:10:37.324 Associated with SR-IOV VF: No 00:10:37.324 Max Data Transfer Size: 524288 00:10:37.324 Max Number of Namespaces: 256 00:10:37.324 Max Number of I/O Queues: 64 00:10:37.324 NVMe Specification Version (VS): 1.4 00:10:37.324 NVMe Specification Version (Identify): 1.4 00:10:37.324 Maximum Queue Entries: 2048 00:10:37.324 Contiguous Queues Required: Yes 00:10:37.324 Arbitration Mechanisms Supported 00:10:37.324 Weighted Round Robin: Not Supported 00:10:37.324 Vendor Specific: Not Supported 00:10:37.324 Reset Timeout: 7500 ms 00:10:37.324 Doorbell Stride: 4 bytes 00:10:37.324 NVM Subsystem Reset: Not Supported 00:10:37.324 Command Sets Supported 00:10:37.324 NVM Command Set: Supported 00:10:37.324 Boot Partition: Not Supported 00:10:37.324 Memory Page Size Minimum: 4096 bytes 00:10:37.324 Memory Page Size Maximum: 65536 bytes 00:10:37.324 Persistent Memory Region: Not Supported 00:10:37.324 Optional Asynchronous Events Supported 00:10:37.324 Namespace Attribute Notices: Supported 00:10:37.324 Firmware Activation Notices: Not Supported 00:10:37.324 ANA Change Notices: Not Supported 00:10:37.324 PLE Aggregate Log Change Notices: Not Supported 00:10:37.324 LBA Status Info Alert Notices: Not Supported 00:10:37.324 EGE Aggregate Log Change Notices: Not Supported 00:10:37.324 Normal NVM Subsystem Shutdown event: Not Supported 00:10:37.324 Zone Descriptor Change Notices: Not Supported 00:10:37.324 Discovery Log Change Notices: Not Supported 00:10:37.324 Controller Attributes 00:10:37.324 128-bit Host Identifier: Not Supported 00:10:37.324 Non-Operational Permissive Mode: Not Supported 00:10:37.324 NVM Sets: Not Supported 00:10:37.324 Read Recovery Levels: Not Supported 00:10:37.324 Endurance Groups: Not Supported 00:10:37.324 Predictable Latency Mode: Not Supported 00:10:37.324 Traffic Based Keep ALive: Not Supported 00:10:37.324 Namespace Granularity: Not Supported 00:10:37.324 SQ Associations: Not Supported 00:10:37.325 UUID List: Not Supported 00:10:37.325 Multi-Domain Subsystem: Not Supported 00:10:37.325 Fixed Capacity Management: Not Supported 00:10:37.325 Variable Capacity Management: Not Supported 00:10:37.325 Delete Endurance Group: Not Supported 00:10:37.325 Delete NVM Set: Not Supported 00:10:37.325 Extended LBA Formats Supported: Supported 00:10:37.325 Flexible Data Placement Supported: Not Supported 00:10:37.325 00:10:37.325 Controller Memory Buffer Support 00:10:37.325 ================================ 00:10:37.325 Supported: No 00:10:37.325 00:10:37.325 Persistent Memory Region Support 00:10:37.325 ================================ 00:10:37.325 Supported: No 00:10:37.325 00:10:37.325 Admin Command Set Attributes 00:10:37.325 ============================ 00:10:37.325 Security Send/Receive: Not Supported 00:10:37.325 Format NVM: Supported 00:10:37.325 Firmware Activate/Download: Not Supported 00:10:37.325 Namespace Management: Supported 00:10:37.325 Device Self-Test: Not Supported 00:10:37.325 Directives: Supported 00:10:37.325 NVMe-MI: Not Supported 
00:10:37.325 Virtualization Management: Not Supported 00:10:37.325 Doorbell Buffer Config: Supported 00:10:37.325 Get LBA Status Capability: Not Supported 00:10:37.325 Command & Feature Lockdown Capability: Not Supported 00:10:37.325 Abort Command Limit: 4 00:10:37.325 Async Event Request Limit: 4 00:10:37.325 Number of Firmware Slots: N/A 00:10:37.325 Firmware Slot 1 Read-Only: N/A 00:10:37.325 Firmware Activation Without Reset: N/A 00:10:37.325 Multiple Update Detection Support: N/A 00:10:37.325 Firmware Update Granularity: No Information Provided 00:10:37.325 Per-Namespace SMART Log: Yes 00:10:37.325 Asymmetric Namespace Access Log Page: Not Supported 00:10:37.325 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:37.325 Command Effects Log Page: Supported 00:10:37.325 Get Log Page Extended Data: Supported 00:10:37.325 Telemetry Log Pages: Not Supported 00:10:37.325 Persistent Event Log Pages: Not Supported 00:10:37.325 Supported Log Pages Log Page: May Support 00:10:37.325 Commands Supported & Effects Log Page: Not Supported 00:10:37.325 Feature Identifiers & Effects Log Page:May Support 00:10:37.325 NVMe-MI Commands & Effects Log Page: May Support 00:10:37.325 Data Area 4 for Telemetry Log: Not Supported 00:10:37.325 Error Log Page Entries Supported: 1 00:10:37.325 Keep Alive: Not Supported 00:10:37.325 00:10:37.325 NVM Command Set Attributes 00:10:37.325 ========================== 00:10:37.325 Submission Queue Entry Size 00:10:37.325 Max: 64 00:10:37.325 Min: 64 00:10:37.325 Completion Queue Entry Size 00:10:37.325 Max: 16 00:10:37.325 Min: 16 00:10:37.325 Number of Namespaces: 256 00:10:37.325 Compare Command: Supported 00:10:37.325 Write Uncorrectable Command: Not Supported 00:10:37.325 Dataset Management Command: Supported 00:10:37.325 Write Zeroes Command: Supported 00:10:37.325 Set Features Save Field: Supported 00:10:37.325 Reservations: Not Supported 00:10:37.325 Timestamp: Supported 00:10:37.325 Copy: Supported 00:10:37.325 Volatile Write Cache: Present 00:10:37.325 Atomic Write Unit (Normal): 1 00:10:37.325 Atomic Write Unit (PFail): 1 00:10:37.325 Atomic Compare & Write Unit: 1 00:10:37.325 Fused Compare & Write: Not Supported 00:10:37.325 Scatter-Gather List 00:10:37.325 SGL Command Set: Supported 00:10:37.325 SGL Keyed: Not Supported 00:10:37.325 SGL Bit Bucket Descriptor: Not Supported 00:10:37.325 SGL Metadata Pointer: Not Supported 00:10:37.325 Oversized SGL: Not Supported 00:10:37.325 SGL Metadata Address: Not Supported 00:10:37.325 SGL Offset: Not Supported 00:10:37.325 Transport SGL Data Block: Not Supported 00:10:37.325 Replay Protected Memory Block: Not Supported 00:10:37.325 00:10:37.325 Firmware Slot Information 00:10:37.325 ========================= 00:10:37.325 Active slot: 1 00:10:37.325 Slot 1 Firmware Revision: 1.0 00:10:37.325 00:10:37.325 00:10:37.325 Commands Supported and Effects 00:10:37.325 ============================== 00:10:37.325 Admin Commands 00:10:37.325 -------------- 00:10:37.325 Delete I/O Submission Queue (00h): Supported 00:10:37.325 Create I/O Submission Queue (01h): Supported 00:10:37.325 Get Log Page (02h): Supported 00:10:37.325 Delete I/O Completion Queue (04h): Supported 00:10:37.325 Create I/O Completion Queue (05h): Supported 00:10:37.325 Identify (06h): Supported 00:10:37.325 Abort (08h): Supported 00:10:37.325 Set Features (09h): Supported 00:10:37.325 Get Features (0Ah): Supported 00:10:37.325 Asynchronous Event Request (0Ch): Supported 00:10:37.325 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:37.325 Directive 
Send (19h): Supported 00:10:37.325 Directive Receive (1Ah): Supported 00:10:37.325 Virtualization Management (1Ch): Supported 00:10:37.325 Doorbell Buffer Config (7Ch): Supported 00:10:37.325 Format NVM (80h): Supported LBA-Change 00:10:37.325 I/O Commands 00:10:37.325 ------------ 00:10:37.325 Flush (00h): Supported LBA-Change 00:10:37.325 Write (01h): Supported LBA-Change 00:10:37.325 Read (02h): Supported 00:10:37.325 Compare (05h): Supported 00:10:37.325 Write Zeroes (08h): Supported LBA-Change 00:10:37.325 Dataset Management (09h): Supported LBA-Change 00:10:37.325 Unknown (0Ch): Supported 00:10:37.325 Unknown (12h): Supported 00:10:37.325 Copy (19h): Supported LBA-Change 00:10:37.325 Unknown (1Dh): Supported LBA-Change 00:10:37.325 00:10:37.325 Error Log 00:10:37.325 ========= 00:10:37.325 00:10:37.325 Arbitration 00:10:37.325 =========== 00:10:37.325 Arbitration Burst: no limit 00:10:37.325 00:10:37.325 Power Management 00:10:37.325 ================ 00:10:37.325 Number of Power States: 1 00:10:37.325 Current Power State: Power State #0 00:10:37.325 Power State #0: 00:10:37.325 Max Power: 25.00 W 00:10:37.325 Non-Operational State: Operational 00:10:37.325 Entry Latency: 16 microseconds 00:10:37.325 Exit Latency: 4 microseconds 00:10:37.325 Relative Read Throughput: 0 00:10:37.325 Relative Read Latency: 0 00:10:37.325 Relative Write Throughput: 0 00:10:37.325 Relative Write Latency: 0 00:10:37.325 Idle Power[2024-06-10 09:59:26.649495] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69439 terminated unexpected 00:10:37.325 : Not Reported 00:10:37.325 Active Power: Not Reported 00:10:37.325 Non-Operational Permissive Mode: Not Supported 00:10:37.325 00:10:37.325 Health Information 00:10:37.325 ================== 00:10:37.325 Critical Warnings: 00:10:37.325 Available Spare Space: OK 00:10:37.325 Temperature: OK 00:10:37.325 Device Reliability: OK 00:10:37.325 Read Only: No 00:10:37.325 Volatile Memory Backup: OK 00:10:37.325 Current Temperature: 323 Kelvin (50 Celsius) 00:10:37.325 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:37.325 Available Spare: 0% 00:10:37.325 Available Spare Threshold: 0% 00:10:37.325 Life Percentage Used: 0% 00:10:37.325 Data Units Read: 1026 00:10:37.325 Data Units Written: 858 00:10:37.325 Host Read Commands: 48968 00:10:37.325 Host Write Commands: 47451 00:10:37.325 Controller Busy Time: 0 minutes 00:10:37.325 Power Cycles: 0 00:10:37.325 Power On Hours: 0 hours 00:10:37.325 Unsafe Shutdowns: 0 00:10:37.325 Unrecoverable Media Errors: 0 00:10:37.325 Lifetime Error Log Entries: 0 00:10:37.325 Warning Temperature Time: 0 minutes 00:10:37.325 Critical Temperature Time: 0 minutes 00:10:37.325 00:10:37.325 Number of Queues 00:10:37.325 ================ 00:10:37.325 Number of I/O Submission Queues: 64 00:10:37.325 Number of I/O Completion Queues: 64 00:10:37.325 00:10:37.325 ZNS Specific Controller Data 00:10:37.325 ============================ 00:10:37.325 Zone Append Size Limit: 0 00:10:37.325 00:10:37.325 00:10:37.325 Active Namespaces 00:10:37.325 ================= 00:10:37.325 Namespace ID:1 00:10:37.325 Error Recovery Timeout: Unlimited 00:10:37.325 Command Set Identifier: NVM (00h) 00:10:37.325 Deallocate: Supported 00:10:37.325 Deallocated/Unwritten Error: Supported 00:10:37.325 Deallocated Read Value: All 0x00 00:10:37.325 Deallocate in Write Zeroes: Not Supported 00:10:37.325 Deallocated Guard Field: 0xFFFF 00:10:37.325 Flush: Supported 00:10:37.325 Reservation: Not Supported 00:10:37.325 Metadata Transferred as: 
Separate Metadata Buffer 00:10:37.325 Namespace Sharing Capabilities: Private 00:10:37.325 Size (in LBAs): 1548666 (5GiB) 00:10:37.325 Capacity (in LBAs): 1548666 (5GiB) 00:10:37.325 Utilization (in LBAs): 1548666 (5GiB) 00:10:37.325 Thin Provisioning: Not Supported 00:10:37.325 Per-NS Atomic Units: No 00:10:37.325 Maximum Single Source Range Length: 128 00:10:37.325 Maximum Copy Length: 128 00:10:37.325 Maximum Source Range Count: 128 00:10:37.325 NGUID/EUI64 Never Reused: No 00:10:37.325 Namespace Write Protected: No 00:10:37.325 Number of LBA Formats: 8 00:10:37.325 Current LBA Format: LBA Format #07 00:10:37.325 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.325 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.325 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.325 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.325 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.325 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.325 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.325 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:37.325 00:10:37.325 ===================================================== 00:10:37.325 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:37.325 ===================================================== 00:10:37.325 Controller Capabilities/Features 00:10:37.325 ================================ 00:10:37.325 Vendor ID: 1b36 00:10:37.325 Subsystem Vendor ID: 1af4 00:10:37.325 Serial Number: 12341 00:10:37.325 Model Number: QEMU NVMe Ctrl 00:10:37.325 Firmware Version: 8.0.0 00:10:37.325 Recommended Arb Burst: 6 00:10:37.325 IEEE OUI Identifier: 00 54 52 00:10:37.325 Multi-path I/O 00:10:37.325 May have multiple subsystem ports: No 00:10:37.325 May have multiple controllers: No 00:10:37.325 Associated with SR-IOV VF: No 00:10:37.325 Max Data Transfer Size: 524288 00:10:37.325 Max Number of Namespaces: 256 00:10:37.325 Max Number of I/O Queues: 64 00:10:37.325 NVMe Specification Version (VS): 1.4 00:10:37.325 NVMe Specification Version (Identify): 1.4 00:10:37.325 Maximum Queue Entries: 2048 00:10:37.325 Contiguous Queues Required: Yes 00:10:37.325 Arbitration Mechanisms Supported 00:10:37.325 Weighted Round Robin: Not Supported 00:10:37.325 Vendor Specific: Not Supported 00:10:37.325 Reset Timeout: 7500 ms 00:10:37.325 Doorbell Stride: 4 bytes 00:10:37.325 NVM Subsystem Reset: Not Supported 00:10:37.325 Command Sets Supported 00:10:37.325 NVM Command Set: Supported 00:10:37.325 Boot Partition: Not Supported 00:10:37.325 Memory Page Size Minimum: 4096 bytes 00:10:37.325 Memory Page Size Maximum: 65536 bytes 00:10:37.325 Persistent Memory Region: Not Supported 00:10:37.325 Optional Asynchronous Events Supported 00:10:37.325 Namespace Attribute Notices: Supported 00:10:37.325 Firmware Activation Notices: Not Supported 00:10:37.325 ANA Change Notices: Not Supported 00:10:37.325 PLE Aggregate Log Change Notices: Not Supported 00:10:37.325 LBA Status Info Alert Notices: Not Supported 00:10:37.325 EGE Aggregate Log Change Notices: Not Supported 00:10:37.325 Normal NVM Subsystem Shutdown event: Not Supported 00:10:37.325 Zone Descriptor Change Notices: Not Supported 00:10:37.325 Discovery Log Change Notices: Not Supported 00:10:37.325 Controller Attributes 00:10:37.325 128-bit Host Identifier: Not Supported 00:10:37.325 Non-Operational Permissive Mode: Not Supported 00:10:37.325 NVM Sets: Not Supported 00:10:37.325 Read Recovery Levels: Not Supported 00:10:37.325 Endurance Groups: Not Supported 00:10:37.325 
Predictable Latency Mode: Not Supported 00:10:37.325 Traffic Based Keep Alive: Not Supported 00:10:37.325 Namespace Granularity: Not Supported 00:10:37.325 SQ Associations: Not Supported 00:10:37.325 UUID List: Not Supported 00:10:37.325 Multi-Domain Subsystem: Not Supported 00:10:37.325 Fixed Capacity Management: Not Supported 00:10:37.325 Variable Capacity Management: Not Supported 00:10:37.325 Delete Endurance Group: Not Supported 00:10:37.325 Delete NVM Set: Not Supported 00:10:37.325 Extended LBA Formats Supported: Supported 00:10:37.325 Flexible Data Placement Supported: Not Supported 00:10:37.325 00:10:37.325 Controller Memory Buffer Support 00:10:37.325 ================================ 00:10:37.325 Supported: No 00:10:37.325 00:10:37.325 Persistent Memory Region Support 00:10:37.325 ================================ 00:10:37.325 Supported: No 00:10:37.326 00:10:37.326 Admin Command Set Attributes 00:10:37.326 ============================ 00:10:37.326 Security Send/Receive: Not Supported 00:10:37.326 Format NVM: Supported 00:10:37.326 Firmware Activate/Download: Not Supported 00:10:37.326 Namespace Management: Supported 00:10:37.326 Device Self-Test: Not Supported 00:10:37.326 Directives: Supported 00:10:37.326 NVMe-MI: Not Supported 00:10:37.326 Virtualization Management: Not Supported 00:10:37.326 Doorbell Buffer Config: Supported 00:10:37.326 Get LBA Status Capability: Not Supported 00:10:37.326 Command & Feature Lockdown Capability: Not Supported 00:10:37.326 Abort Command Limit: 4 00:10:37.326 Async Event Request Limit: 4 00:10:37.326 Number of Firmware Slots: N/A 00:10:37.326 Firmware Slot 1 Read-Only: N/A 00:10:37.326 Firmware Activation Without Reset: N/A 00:10:37.326 Multiple Update Detection Support: N/A 00:10:37.326 Firmware Update Granularity: No Information Provided 00:10:37.326 Per-Namespace SMART Log: Yes 00:10:37.326 Asymmetric Namespace Access Log Page: Not Supported 00:10:37.326 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:37.326 Command Effects Log Page: Supported 00:10:37.326 Get Log Page Extended Data: Supported 00:10:37.326 Telemetry Log Pages: Not Supported 00:10:37.326 Persistent Event Log Pages: Not Supported 00:10:37.326 Supported Log Pages Log Page: May Support 00:10:37.326 Commands Supported & Effects Log Page: Not Supported 00:10:37.326 Feature Identifiers & Effects Log Page: May Support 00:10:37.326 NVMe-MI Commands & Effects Log Page: May Support 00:10:37.326 Data Area 4 for Telemetry Log: Not Supported 00:10:37.326 Error Log Page Entries Supported: 1 00:10:37.326 Keep Alive: Not Supported 00:10:37.326 00:10:37.326 NVM Command Set Attributes 00:10:37.326 ========================== 00:10:37.326 Submission Queue Entry Size 00:10:37.326 Max: 64 00:10:37.326 Min: 64 00:10:37.326 Completion Queue Entry Size 00:10:37.326 Max: 16 00:10:37.326 Min: 16 00:10:37.326 Number of Namespaces: 256 00:10:37.326 Compare Command: Supported 00:10:37.326 Write Uncorrectable Command: Not Supported 00:10:37.326 Dataset Management Command: Supported 00:10:37.326 Write Zeroes Command: Supported 00:10:37.326 Set Features Save Field: Supported 00:10:37.326 Reservations: Not Supported 00:10:37.326 Timestamp: Supported 00:10:37.326 Copy: Supported 00:10:37.326 Volatile Write Cache: Present 00:10:37.326 Atomic Write Unit (Normal): 1 00:10:37.326 Atomic Write Unit (PFail): 1 00:10:37.326 Atomic Compare & Write Unit: 1 00:10:37.326 Fused Compare & Write: Not Supported 00:10:37.326 Scatter-Gather List 00:10:37.326 SGL Command Set: Supported 00:10:37.326 SGL Keyed: Not Supported 
00:10:37.326 SGL Bit Bucket Descriptor: Not Supported 00:10:37.326 SGL Metadata Pointer: Not Supported 00:10:37.326 Oversized SGL: Not Supported 00:10:37.326 SGL Metadata Address: Not Supported 00:10:37.326 SGL Offset: Not Supported 00:10:37.326 Transport SGL Data Block: Not Supported 00:10:37.326 Replay Protected Memory Block: Not Supported 00:10:37.326 00:10:37.326 Firmware Slot Information 00:10:37.326 ========================= 00:10:37.326 Active slot: 1 00:10:37.326 Slot 1 Firmware Revision: 1.0 00:10:37.326 00:10:37.326 00:10:37.326 Commands Supported and Effects 00:10:37.326 ============================== 00:10:37.326 Admin Commands 00:10:37.326 -------------- 00:10:37.326 Delete I/O Submission Queue (00h): Supported 00:10:37.326 Create I/O Submission Queue (01h): Supported 00:10:37.326 Get Log Page (02h): Supported 00:10:37.326 Delete I/O Completion Queue (04h): Supported 00:10:37.326 Create I/O Completion Queue (05h): Supported 00:10:37.326 Identify (06h): Supported 00:10:37.326 Abort (08h): Supported 00:10:37.326 Set Features (09h): Supported 00:10:37.326 Get Features (0Ah): Supported 00:10:37.326 Asynchronous Event Request (0Ch): Supported 00:10:37.326 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:37.326 Directive Send (19h): Supported 00:10:37.326 Directive Receive (1Ah): Supported 00:10:37.326 Virtualization Management (1Ch): Supported 00:10:37.326 Doorbell Buffer Config (7Ch): Supported 00:10:37.326 Format NVM (80h): Supported LBA-Change 00:10:37.326 I/O Commands 00:10:37.326 ------------ 00:10:37.326 Flush (00h): Supported LBA-Change 00:10:37.326 Write (01h): Supported LBA-Change 00:10:37.326 Read (02h): Supported 00:10:37.326 Compare (05h): Supported 00:10:37.326 Write Zeroes (08h): Supported LBA-Change 00:10:37.326 Dataset Management (09h): Supported LBA-Change 00:10:37.326 Unknown (0Ch): Supported 00:10:37.326 Unknown (12h): Supported 00:10:37.326 Copy (19h): Supported LBA-Change 00:10:37.326 Unknown (1Dh): Supported LBA-Change 00:10:37.326 00:10:37.326 Error Log 00:10:37.326 ========= 00:10:37.326 00:10:37.326 Arbitration 00:10:37.326 =========== 00:10:37.326 Arbitration Burst: no limit 00:10:37.326 00:10:37.326 Power Management 00:10:37.326 ================ 00:10:37.326 Number of Power States: 1 00:10:37.326 Current Power State: Power State #0 00:10:37.326 Power State #0: 00:10:37.326 Max Power: 25.00 W 00:10:37.326 Non-Operational State: Operational 00:10:37.326 Entry Latency: 16 microseconds 00:10:37.326 Exit Latency: 4 microseconds 00:10:37.326 Relative Read Throughput: 0 00:10:37.326 Relative Read Latency: 0 00:10:37.326 Relative Write Throughput: 0 00:10:37.326 Relative Write Latency: 0 00:10:37.326 Idle Power: Not Reported 00:10:37.326 Active Power: Not Reported 00:10:37.326 Non-Operational Permissive Mode: Not Supported 00:10:37.326 00:10:37.326 Health Information 00:10:37.326 ================== 00:10:37.326 Critical Warnings: 00:10:37.326 Available Spare Space: OK 00:10:37.326 Temperature: OK 00:10:37.326 Device Reliability: OK 00:10:37.326 Read Only: No 00:10:37.326 Volatile Memory Backup: OK 00:10:37.326 Current Temperature: 323 Kelvin (50 Celsius) 00:10:37.326 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:37.326 Available Spare: 0% 00:10:37.326 Available Spare Threshold: 0% 00:10:37.326 Life Percentage Used: 0% 00:10:37.326 Data Units Read: 731 00:10:37.326 Data Units Written: 580 00:10:37.326 Host Read Commands: 34515 00:10:37.326 Host Write Commands: 32242 00:10:37.326 Controller Busy Time: 0 minutes 00:10:37.326 Power Cycles: 0 
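The health blocks here report temperatures the way NVMe encodes them, in kelvins, with the Celsius value in parentheses obtained by subtracting 273. A minimal sketch of that conversion (plain C, not the identify tool's own code):

#include <stdio.h>

/* NVMe health data carries composite temperature in kelvins; the
 * "(50 Celsius)" annotations above are this integer conversion:
 * 323 K - 273 = 50 C, and the 343 K threshold maps to 70 C. */
static int kelvin_to_celsius(int kelvin)
{
    return kelvin - 273;
}

int main(void)
{
    printf("%d C\n", kelvin_to_celsius(323)); /* 50 */
    printf("%d C\n", kelvin_to_celsius(343)); /* 70 */
    return 0;
}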
00:10:37.326 Power On Hours: 0 hours 00:10:37.326 Unsafe Shutdowns: 0 00:10:37.326 Unrecoverable Media Errors: 0 00:10:37.326 Lifetime Error Log Entries: 0 00:10:37.326 Warning Temperature Time: 0 minutes 00:10:37.326 Critical Temperature Time: 0 minutes 00:10:37.326 00:10:37.326 Number of Queues 00:10:37.326 ================ 00:10:37.326 Number of I/O Submission Queues: 64 00:10:37.326 Number of I/O Completion Queues: 64 00:10:37.326 00:10:37.326 ZNS Specific Controller Data 00:10:37.326 ============================ 00:10:37.326 Zone Append Size Limit: 0 00:10:37.326 00:10:37.326 00:10:37.326 Active Namespaces 00:10:37.326 ================= 00:10:37.326 Namespace ID:1 00:10:37.326 Error Recovery Timeout: Unlimited 00:10:37.326 [2024-06-10 09:59:26.650456] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69439 terminated unexpected 00:10:37.326 Command Set Identifier: NVM (00h) 00:10:37.326 Deallocate: Supported 00:10:37.326 Deallocated/Unwritten Error: Supported 00:10:37.326 Deallocated Read Value: All 0x00 00:10:37.326 Deallocate in Write Zeroes: Not Supported 00:10:37.326 Deallocated Guard Field: 0xFFFF 00:10:37.326 Flush: Supported 00:10:37.326 Reservation: Not Supported 00:10:37.326 Namespace Sharing Capabilities: Private 00:10:37.326 Size (in LBAs): 1310720 (5GiB) 00:10:37.326 Capacity (in LBAs): 1310720 (5GiB) 00:10:37.326 Utilization (in LBAs): 1310720 (5GiB) 00:10:37.326 Thin Provisioning: Not Supported 00:10:37.326 Per-NS Atomic Units: No 00:10:37.326 Maximum Single Source Range Length: 128 00:10:37.326 Maximum Copy Length: 128 00:10:37.326 Maximum Source Range Count: 128 00:10:37.326 NGUID/EUI64 Never Reused: No 00:10:37.326 Namespace Write Protected: No 00:10:37.326 Number of LBA Formats: 8 00:10:37.326 Current LBA Format: LBA Format #04 00:10:37.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.326 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.326 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.326 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.326 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.326 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.326 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.326 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:37.326 00:10:37.326 ===================================================== 00:10:37.326 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:37.326 ===================================================== 00:10:37.326 Controller Capabilities/Features 00:10:37.326 ================================ 00:10:37.326 Vendor ID: 1b36 00:10:37.326 Subsystem Vendor ID: 1af4 00:10:37.326 Serial Number: 12343 00:10:37.326 Model Number: QEMU NVMe Ctrl 00:10:37.326 Firmware Version: 8.0.0 00:10:37.326 Recommended Arb Burst: 6 00:10:37.326 IEEE OUI Identifier: 00 54 52 00:10:37.326 Multi-path I/O 00:10:37.326 May have multiple subsystem ports: No 00:10:37.326 May have multiple controllers: Yes 00:10:37.326 Associated with SR-IOV VF: No 00:10:37.326 Max Data Transfer Size: 524288 00:10:37.326 Max Number of Namespaces: 256 00:10:37.326 Max Number of I/O Queues: 64 00:10:37.326 NVMe Specification Version (VS): 1.4 00:10:37.326 NVMe Specification Version (Identify): 1.4 00:10:37.326 Maximum Queue Entries: 2048 00:10:37.326 Contiguous Queues Required: Yes 00:10:37.326 Arbitration Mechanisms Supported 00:10:37.326 Weighted Round Robin: Not Supported 00:10:37.326 Vendor Specific: Not Supported 00:10:37.326 Reset Timeout: 7500 ms 00:10:37.326 
Doorbell Stride: 4 bytes 00:10:37.326 NVM Subsystem Reset: Not Supported 00:10:37.326 Command Sets Supported 00:10:37.326 NVM Command Set: Supported 00:10:37.326 Boot Partition: Not Supported 00:10:37.326 Memory Page Size Minimum: 4096 bytes 00:10:37.326 Memory Page Size Maximum: 65536 bytes 00:10:37.326 Persistent Memory Region: Not Supported 00:10:37.326 Optional Asynchronous Events Supported 00:10:37.326 Namespace Attribute Notices: Supported 00:10:37.326 Firmware Activation Notices: Not Supported 00:10:37.326 ANA Change Notices: Not Supported 00:10:37.326 PLE Aggregate Log Change Notices: Not Supported 00:10:37.326 LBA Status Info Alert Notices: Not Supported 00:10:37.326 EGE Aggregate Log Change Notices: Not Supported 00:10:37.326 Normal NVM Subsystem Shutdown event: Not Supported 00:10:37.326 Zone Descriptor Change Notices: Not Supported 00:10:37.326 Discovery Log Change Notices: Not Supported 00:10:37.326 Controller Attributes 00:10:37.326 128-bit Host Identifier: Not Supported 00:10:37.326 Non-Operational Permissive Mode: Not Supported 00:10:37.326 NVM Sets: Not Supported 00:10:37.326 Read Recovery Levels: Not Supported 00:10:37.326 Endurance Groups: Supported 00:10:37.326 Predictable Latency Mode: Not Supported 00:10:37.326 Traffic Based Keep Alive: Not Supported 00:10:37.326 Namespace Granularity: Not Supported 00:10:37.326 SQ Associations: Not Supported 00:10:37.326 UUID List: Not Supported 00:10:37.326 Multi-Domain Subsystem: Not Supported 00:10:37.326 Fixed Capacity Management: Not Supported 00:10:37.326 Variable Capacity Management: Not Supported 00:10:37.326 Delete Endurance Group: Not Supported 00:10:37.326 Delete NVM Set: Not Supported 00:10:37.326 Extended LBA Formats Supported: Supported 00:10:37.326 Flexible Data Placement Supported: Supported 00:10:37.326 00:10:37.326 Controller Memory Buffer Support 00:10:37.326 ================================ 00:10:37.326 Supported: No 00:10:37.326 00:10:37.326 Persistent Memory Region Support 00:10:37.326 ================================ 00:10:37.326 Supported: No 00:10:37.326 00:10:37.326 Admin Command Set Attributes 00:10:37.326 ============================ 00:10:37.326 Security Send/Receive: Not Supported 00:10:37.326 Format NVM: Supported 00:10:37.326 Firmware Activate/Download: Not Supported 00:10:37.326 Namespace Management: Supported 00:10:37.326 Device Self-Test: Not Supported 00:10:37.326 Directives: Supported 00:10:37.326 NVMe-MI: Not Supported 00:10:37.326 Virtualization Management: Not Supported 00:10:37.326 Doorbell Buffer Config: Supported 00:10:37.326 Get LBA Status Capability: Not Supported 00:10:37.326 Command & Feature Lockdown Capability: Not Supported 00:10:37.326 Abort Command Limit: 4 00:10:37.326 Async Event Request Limit: 4 00:10:37.326 Number of Firmware Slots: N/A 00:10:37.326 Firmware Slot 1 Read-Only: N/A 00:10:37.326 Firmware Activation Without Reset: N/A 00:10:37.326 Multiple Update Detection Support: N/A 00:10:37.326 Firmware Update Granularity: No Information Provided 00:10:37.326 Per-Namespace SMART Log: Yes 00:10:37.326 Asymmetric Namespace Access Log Page: Not Supported 00:10:37.326 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:37.326 Command Effects Log Page: Supported 00:10:37.327 Get Log Page Extended Data: Supported 00:10:37.327 Telemetry Log Pages: Not Supported 00:10:37.327 Persistent Event Log Pages: Not Supported 00:10:37.327 Supported Log Pages Log Page: May Support 00:10:37.327 Commands Supported & Effects Log Page: Not Supported 00:10:37.327 Feature Identifiers & Effects Log 
Page: May Support 00:10:37.327 NVMe-MI Commands & Effects Log Page: May Support 00:10:37.327 Data Area 4 for Telemetry Log: Not Supported 00:10:37.327 Error Log Page Entries Supported: 1 00:10:37.327 Keep Alive: Not Supported 00:10:37.327 00:10:37.327 NVM Command Set Attributes 00:10:37.327 ========================== 00:10:37.327 Submission Queue Entry Size 00:10:37.327 Max: 64 00:10:37.327 Min: 64 00:10:37.327 Completion Queue Entry Size 00:10:37.327 Max: 16 00:10:37.327 Min: 16 00:10:37.327 Number of Namespaces: 256 00:10:37.327 Compare Command: Supported 00:10:37.327 Write Uncorrectable Command: Not Supported 00:10:37.327 Dataset Management Command: Supported 00:10:37.327 Write Zeroes Command: Supported 00:10:37.327 Set Features Save Field: Supported 00:10:37.327 Reservations: Not Supported 00:10:37.327 Timestamp: Supported 00:10:37.327 Copy: Supported 00:10:37.327 Volatile Write Cache: Present 00:10:37.327 Atomic Write Unit (Normal): 1 00:10:37.327 Atomic Write Unit (PFail): 1 00:10:37.327 Atomic Compare & Write Unit: 1 00:10:37.327 Fused Compare & Write: Not Supported 00:10:37.327 Scatter-Gather List 00:10:37.327 SGL Command Set: Supported 00:10:37.327 SGL Keyed: Not Supported 00:10:37.327 SGL Bit Bucket Descriptor: Not Supported 00:10:37.327 SGL Metadata Pointer: Not Supported 00:10:37.327 Oversized SGL: Not Supported 00:10:37.327 SGL Metadata Address: Not Supported 00:10:37.327 SGL Offset: Not Supported 00:10:37.327 Transport SGL Data Block: Not Supported 00:10:37.327 Replay Protected Memory Block: Not Supported 00:10:37.327 00:10:37.327 Firmware Slot Information 00:10:37.327 ========================= 00:10:37.327 Active slot: 1 00:10:37.327 Slot 1 Firmware Revision: 1.0 00:10:37.327 00:10:37.327 00:10:37.327 Commands Supported and Effects 00:10:37.327 ============================== 00:10:37.327 Admin Commands 00:10:37.327 -------------- 00:10:37.327 Delete I/O Submission Queue (00h): Supported 00:10:37.327 Create I/O Submission Queue (01h): Supported 00:10:37.327 Get Log Page (02h): Supported 00:10:37.327 Delete I/O Completion Queue (04h): Supported 00:10:37.327 Create I/O Completion Queue (05h): Supported 00:10:37.327 Identify (06h): Supported 00:10:37.327 Abort (08h): Supported 00:10:37.327 Set Features (09h): Supported 00:10:37.327 Get Features (0Ah): Supported 00:10:37.327 Asynchronous Event Request (0Ch): Supported 00:10:37.327 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:37.327 Directive Send (19h): Supported 00:10:37.327 Directive Receive (1Ah): Supported 00:10:37.327 Virtualization Management (1Ch): Supported 00:10:37.327 Doorbell Buffer Config (7Ch): Supported 00:10:37.327 Format NVM (80h): Supported LBA-Change 00:10:37.327 I/O Commands 00:10:37.327 ------------ 00:10:37.327 Flush (00h): Supported LBA-Change 00:10:37.327 Write (01h): Supported LBA-Change 00:10:37.327 Read (02h): Supported 00:10:37.327 Compare (05h): Supported 00:10:37.327 Write Zeroes (08h): Supported LBA-Change 00:10:37.327 Dataset Management (09h): Supported LBA-Change 00:10:37.327 Unknown (0Ch): Supported 00:10:37.327 Unknown (12h): Supported 00:10:37.327 Copy (19h): Supported LBA-Change 00:10:37.327 Unknown (1Dh): Supported LBA-Change 00:10:37.327 00:10:37.327 Error Log 00:10:37.327 ========= 00:10:37.327 00:10:37.327 Arbitration 00:10:37.327 =========== 00:10:37.327 Arbitration Burst: no limit 00:10:37.327 00:10:37.327 Power Management 00:10:37.327 ================ 00:10:37.327 Number of Power States: 1 00:10:37.327 Current Power State: Power State #0 00:10:37.327 Power State #0: 
00:10:37.327 Max Power: 25.00 W 00:10:37.327 Non-Operational State: Operational 00:10:37.327 Entry Latency: 16 microseconds 00:10:37.327 Exit Latency: 4 microseconds 00:10:37.327 Relative Read Throughput: 0 00:10:37.327 Relative Read Latency: 0 00:10:37.327 Relative Write Throughput: 0 00:10:37.327 Relative Write Latency: 0 00:10:37.327 Idle Power: Not Reported 00:10:37.327 Active Power: Not Reported 00:10:37.327 Non-Operational Permissive Mode: Not Supported 00:10:37.327 00:10:37.327 Health Information 00:10:37.327 ================== 00:10:37.327 Critical Warnings: 00:10:37.327 Available Spare Space: OK 00:10:37.327 Temperature: OK 00:10:37.327 Device Reliability: OK 00:10:37.327 Read Only: No 00:10:37.327 Volatile Memory Backup: OK 00:10:37.327 Current Temperature: 323 Kelvin (50 Celsius) 00:10:37.327 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:37.327 Available Spare: 0% 00:10:37.327 Available Spare Threshold: 0% 00:10:37.327 Life Percentage Used: 0% 00:10:37.327 Data Units Read: 796 00:10:37.327 Data Units Written: 689 00:10:37.327 Host Read Commands: 34630 00:10:37.327 Host Write Commands: 33220 00:10:37.327 Controller Busy Time: 0 minutes 00:10:37.327 Power Cycles: 0 00:10:37.327 Power On Hours: 0 hours 00:10:37.327 Unsafe Shutdowns: 0 00:10:37.327 Unrecoverable Media Errors: 0 00:10:37.327 Lifetime Error Log Entries: 0 00:10:37.327 Warning Temperature Time: 0 minutes 00:10:37.327 Critical Temperature Time: 0 minutes 00:10:37.327 00:10:37.327 Number of Queues 00:10:37.327 ================ 00:10:37.327 Number of I/O Submission Queues: 64 00:10:37.327 Number of I/O Completion Queues: 64 00:10:37.327 00:10:37.327 ZNS Specific Controller Data 00:10:37.327 ============================ 00:10:37.327 Zone Append Size Limit: 0 00:10:37.327 00:10:37.327 00:10:37.327 Active Namespaces 00:10:37.327 ================= 00:10:37.327 Namespace ID:1 00:10:37.327 Error Recovery Timeout: Unlimited 00:10:37.327 Command Set Identifier: NVM (00h) 00:10:37.327 Deallocate: Supported 00:10:37.327 Deallocated/Unwritten Error: Supported 00:10:37.327 Deallocated Read Value: All 0x00 00:10:37.327 Deallocate in Write Zeroes: Not Supported 00:10:37.327 Deallocated Guard Field: 0xFFFF 00:10:37.327 Flush: Supported 00:10:37.327 Reservation: Not Supported 00:10:37.327 Namespace Sharing Capabilities: Multiple Controllers 00:10:37.327 Size (in LBAs): 262144 (1GiB) 00:10:37.327 Capacity (in LBAs): 262144 (1GiB) 00:10:37.327 Utilization (in LBAs): 262144 (1GiB) 00:10:37.327 Thin Provisioning: Not Supported 00:10:37.327 Per-NS Atomic Units: No 00:10:37.327 Maximum Single Source Range Length: 128 00:10:37.327 Maximum Copy Length: 128 00:10:37.327 Maximum Source Range Count: 128 00:10:37.327 NGUID/EUI64 Never Reused: No 00:10:37.327 Namespace Write Protected: No 00:10:37.327 Endurance group ID: 1 00:10:37.327 Number of LBA Formats: 8 00:10:37.327 Current LBA Format: LBA Format #04 00:10:37.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.327 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.327 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.327 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.327 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.327 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.327 [2024-06-10 09:59:26.652403] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69439 terminated unexpected 00:10:37.327 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.327 LBA Format #07: Data Size: 4096 Metadata Size: 64 
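The size annotations in these namespace dumps follow directly from the LBA count and the current format's data size: 262144 LBAs at 4096 bytes is exactly 1 GiB, just as the 1310720-LBA namespaces above work out to 5 GiB. A small plain-C check of that arithmetic (illustrative only, not part of the test):

#include <stdint.h>
#include <stdio.h>

/* Capacity math behind "Size (in LBAs): 262144 (1GiB)" with current
 * LBA format #04 (4096-byte data blocks). */
int main(void)
{
    uint64_t nlbas = 262144;       /* Size (in LBAs) */
    uint64_t lba_data_size = 4096; /* current LBA format's data size */
    uint64_t bytes = nlbas * lba_data_size;
    printf("%llu bytes = %llu GiB\n",
           (unsigned long long)bytes, (unsigned long long)(bytes >> 30));
    return 0;
}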
00:10:37.327 00:10:37.327 Get Feature FDP: 00:10:37.327 ================ 00:10:37.327 Enabled: Yes 00:10:37.327 FDP configuration index: 0 00:10:37.327 00:10:37.327 FDP configurations log page 00:10:37.327 =========================== 00:10:37.327 Number of FDP configurations: 1 00:10:37.327 Version: 0 00:10:37.327 Size: 112 00:10:37.327 FDP Configuration Descriptor: 0 00:10:37.327 Descriptor Size: 96 00:10:37.327 Reclaim Group Identifier format: 2 00:10:37.327 FDP Volatile Write Cache: Not Present 00:10:37.327 FDP Configuration: Valid 00:10:37.327 Vendor Specific Size: 0 00:10:37.327 Number of Reclaim Groups: 2 00:10:37.327 Number of Reclaim Unit Handles: 8 00:10:37.327 Max Placement Identifiers: 128 00:10:37.327 Number of Namespaces Supported: 256 00:10:37.327 Reclaim Unit Nominal Size: 6000000 bytes 00:10:37.327 Estimated Reclaim Unit Time Limit: Not Reported 00:10:37.327 RUH Desc #000: RUH Type: Initially Isolated 00:10:37.327 RUH Desc #001: RUH Type: Initially Isolated 00:10:37.327 RUH Desc #002: RUH Type: Initially Isolated 00:10:37.327 RUH Desc #003: RUH Type: Initially Isolated 00:10:37.327 RUH Desc #004: RUH Type: Initially Isolated 00:10:37.327 RUH Desc #005: RUH Type: Initially Isolated 00:10:37.327 RUH Desc #006: RUH Type: Initially Isolated 00:10:37.327 RUH Desc #007: RUH Type: Initially Isolated 00:10:37.327 00:10:37.327 FDP reclaim unit handle usage log page 00:10:37.327 ====================================== 00:10:37.327 Number of Reclaim Unit Handles: 8 00:10:37.327 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:37.327 RUH Usage Desc #001: RUH Attributes: Unused 00:10:37.327 RUH Usage Desc #002: RUH Attributes: Unused 00:10:37.327 RUH Usage Desc #003: RUH Attributes: Unused 00:10:37.327 RUH Usage Desc #004: RUH Attributes: Unused 00:10:37.327 RUH Usage Desc #005: RUH Attributes: Unused 00:10:37.327 RUH Usage Desc #006: RUH Attributes: Unused 00:10:37.327 RUH Usage Desc #007: RUH Attributes: Unused 00:10:37.327 00:10:37.327 FDP statistics log page 00:10:37.327 ======================= 00:10:37.327 Host bytes with metadata written: 428711936 00:10:37.327 Media bytes with metadata written: 428777472 00:10:37.327 Media bytes erased: 0 00:10:37.327 00:10:37.327 FDP events log page 00:10:37.327 =================== 00:10:37.327 Number of FDP events: 0 00:10:37.327 00:10:37.327 ===================================================== 00:10:37.327 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:37.327 ===================================================== 00:10:37.327 Controller Capabilities/Features 00:10:37.327 ================================ 00:10:37.327 Vendor ID: 1b36 00:10:37.327 Subsystem Vendor ID: 1af4 00:10:37.327 Serial Number: 12342 00:10:37.327 Model Number: QEMU NVMe Ctrl 00:10:37.327 Firmware Version: 8.0.0 00:10:37.327 Recommended Arb Burst: 6 00:10:37.327 IEEE OUI Identifier: 00 54 52 00:10:37.327 Multi-path I/O 00:10:37.327 May have multiple subsystem ports: No 00:10:37.327 May have multiple controllers: No 00:10:37.327 Associated with SR-IOV VF: No 00:10:37.327 Max Data Transfer Size: 524288 00:10:37.327 Max Number of Namespaces: 256 00:10:37.327 Max Number of I/O Queues: 64 00:10:37.327 NVMe Specification Version (VS): 1.4 00:10:37.327 NVMe Specification Version (Identify): 1.4 00:10:37.327 Maximum Queue Entries: 2048 00:10:37.327 Contiguous Queues Required: Yes 00:10:37.327 Arbitration Mechanisms Supported 00:10:37.327 Weighted Round Robin: Not Supported 00:10:37.327 Vendor Specific: Not Supported 00:10:37.327 Reset Timeout: 7500 ms 00:10:37.327 
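One way to read the FDP statistics page above, offered as a rough illustration rather than anything the test itself asserts: dividing media bytes written by host bytes written gives a write-amplification estimate, which for the 428777472 versus 428711936 figures in this run is about 1.00015.

#include <stdio.h>

/* Rough illustration only: write-amplification estimate from the FDP
 * statistics log page values captured above. */
int main(void)
{
    double host_bytes  = 428711936.0; /* Host bytes with metadata written */
    double media_bytes = 428777472.0; /* Media bytes with metadata written */
    printf("WAF ~= %.5f\n", media_bytes / host_bytes); /* ~1.00015 */
    return 0;
}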
Doorbell Stride: 4 bytes 00:10:37.327 NVM Subsystem Reset: Not Supported 00:10:37.327 Command Sets Supported 00:10:37.327 NVM Command Set: Supported 00:10:37.327 Boot Partition: Not Supported 00:10:37.327 Memory Page Size Minimum: 4096 bytes 00:10:37.327 Memory Page Size Maximum: 65536 bytes 00:10:37.327 Persistent Memory Region: Not Supported 00:10:37.327 Optional Asynchronous Events Supported 00:10:37.327 Namespace Attribute Notices: Supported 00:10:37.327 Firmware Activation Notices: Not Supported 00:10:37.327 ANA Change Notices: Not Supported 00:10:37.327 PLE Aggregate Log Change Notices: Not Supported 00:10:37.327 LBA Status Info Alert Notices: Not Supported 00:10:37.327 EGE Aggregate Log Change Notices: Not Supported 00:10:37.327 Normal NVM Subsystem Shutdown event: Not Supported 00:10:37.327 Zone Descriptor Change Notices: Not Supported 00:10:37.327 Discovery Log Change Notices: Not Supported 00:10:37.327 Controller Attributes 00:10:37.327 128-bit Host Identifier: Not Supported 00:10:37.327 Non-Operational Permissive Mode: Not Supported 00:10:37.327 NVM Sets: Not Supported 00:10:37.327 Read Recovery Levels: Not Supported 00:10:37.327 Endurance Groups: Not Supported 00:10:37.327 Predictable Latency Mode: Not Supported 00:10:37.327 Traffic Based Keep Alive: Not Supported 00:10:37.327 Namespace Granularity: Not Supported 00:10:37.327 SQ Associations: Not Supported 00:10:37.327 UUID List: Not Supported 00:10:37.327 Multi-Domain Subsystem: Not Supported 00:10:37.327 Fixed Capacity Management: Not Supported 00:10:37.327 Variable Capacity Management: Not Supported 00:10:37.327 Delete Endurance Group: Not Supported 00:10:37.327 Delete NVM Set: Not Supported 00:10:37.327 Extended LBA Formats Supported: Supported 00:10:37.327 Flexible Data Placement Supported: Not Supported 00:10:37.327 00:10:37.327 Controller Memory Buffer Support 00:10:37.327 ================================ 00:10:37.328 Supported: No 00:10:37.328 00:10:37.328 Persistent Memory Region Support 00:10:37.328 ================================ 00:10:37.328 Supported: No 00:10:37.328 00:10:37.328 Admin Command Set Attributes 00:10:37.328 ============================ 00:10:37.328 Security Send/Receive: Not Supported 00:10:37.328 Format NVM: Supported 00:10:37.328 Firmware Activate/Download: Not Supported 00:10:37.328 Namespace Management: Supported 00:10:37.328 Device Self-Test: Not Supported 00:10:37.328 Directives: Supported 00:10:37.328 NVMe-MI: Not Supported 00:10:37.328 Virtualization Management: Not Supported 00:10:37.328 Doorbell Buffer Config: Supported 00:10:37.328 Get LBA Status Capability: Not Supported 00:10:37.328 Command & Feature Lockdown Capability: Not Supported 00:10:37.328 Abort Command Limit: 4 00:10:37.328 Async Event Request Limit: 4 00:10:37.328 Number of Firmware Slots: N/A 00:10:37.328 Firmware Slot 1 Read-Only: N/A 00:10:37.328 Firmware Activation Without Reset: N/A 00:10:37.328 Multiple Update Detection Support: N/A 00:10:37.328 Firmware Update Granularity: No Information Provided 00:10:37.328 Per-Namespace SMART Log: Yes 00:10:37.328 Asymmetric Namespace Access Log Page: Not Supported 00:10:37.328 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:37.328 Command Effects Log Page: Supported 00:10:37.328 Get Log Page Extended Data: Supported 00:10:37.328 Telemetry Log Pages: Not Supported 00:10:37.328 Persistent Event Log Pages: Not Supported 00:10:37.328 Supported Log Pages Log Page: May Support 00:10:37.328 Commands Supported & Effects Log Page: Not Supported 00:10:37.328 Feature Identifiers & Effects Log 
Page: May Support 00:10:37.328 NVMe-MI Commands & Effects Log Page: May Support 00:10:37.328 Data Area 4 for Telemetry Log: Not Supported 00:10:37.328 Error Log Page Entries Supported: 1 00:10:37.328 Keep Alive: Not Supported 00:10:37.328 00:10:37.328 NVM Command Set Attributes 00:10:37.328 ========================== 00:10:37.328 Submission Queue Entry Size 00:10:37.328 Max: 64 00:10:37.328 Min: 64 00:10:37.328 Completion Queue Entry Size 00:10:37.328 Max: 16 00:10:37.328 Min: 16 00:10:37.328 Number of Namespaces: 256 00:10:37.328 Compare Command: Supported 00:10:37.328 Write Uncorrectable Command: Not Supported 00:10:37.328 Dataset Management Command: Supported 00:10:37.328 Write Zeroes Command: Supported 00:10:37.328 Set Features Save Field: Supported 00:10:37.328 Reservations: Not Supported 00:10:37.328 Timestamp: Supported 00:10:37.328 Copy: Supported 00:10:37.328 Volatile Write Cache: Present 00:10:37.328 Atomic Write Unit (Normal): 1 00:10:37.328 Atomic Write Unit (PFail): 1 00:10:37.328 Atomic Compare & Write Unit: 1 00:10:37.328 Fused Compare & Write: Not Supported 00:10:37.328 Scatter-Gather List 00:10:37.328 SGL Command Set: Supported 00:10:37.328 SGL Keyed: Not Supported 00:10:37.328 SGL Bit Bucket Descriptor: Not Supported 00:10:37.328 SGL Metadata Pointer: Not Supported 00:10:37.328 Oversized SGL: Not Supported 00:10:37.328 SGL Metadata Address: Not Supported 00:10:37.328 SGL Offset: Not Supported 00:10:37.328 Transport SGL Data Block: Not Supported 00:10:37.328 Replay Protected Memory Block: Not Supported 00:10:37.328 00:10:37.328 Firmware Slot Information 00:10:37.328 ========================= 00:10:37.328 Active slot: 1 00:10:37.328 Slot 1 Firmware Revision: 1.0 00:10:37.328 00:10:37.328 00:10:37.328 Commands Supported and Effects 00:10:37.328 ============================== 00:10:37.328 Admin Commands 00:10:37.328 -------------- 00:10:37.328 Delete I/O Submission Queue (00h): Supported 00:10:37.328 Create I/O Submission Queue (01h): Supported 00:10:37.328 Get Log Page (02h): Supported 00:10:37.328 Delete I/O Completion Queue (04h): Supported 00:10:37.328 Create I/O Completion Queue (05h): Supported 00:10:37.328 Identify (06h): Supported 00:10:37.328 Abort (08h): Supported 00:10:37.328 Set Features (09h): Supported 00:10:37.328 Get Features (0Ah): Supported 00:10:37.328 Asynchronous Event Request (0Ch): Supported 00:10:37.328 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:37.328 Directive Send (19h): Supported 00:10:37.328 Directive Receive (1Ah): Supported 00:10:37.328 Virtualization Management (1Ch): Supported 00:10:37.328 Doorbell Buffer Config (7Ch): Supported 00:10:37.328 Format NVM (80h): Supported LBA-Change 00:10:37.328 I/O Commands 00:10:37.328 ------------ 00:10:37.328 Flush (00h): Supported LBA-Change 00:10:37.328 Write (01h): Supported LBA-Change 00:10:37.328 Read (02h): Supported 00:10:37.328 Compare (05h): Supported 00:10:37.328 Write Zeroes (08h): Supported LBA-Change 00:10:37.328 Dataset Management (09h): Supported LBA-Change 00:10:37.328 Unknown (0Ch): Supported 00:10:37.328 Unknown (12h): Supported 00:10:37.328 Copy (19h): Supported LBA-Change 00:10:37.328 Unknown (1Dh): Supported LBA-Change 00:10:37.328 00:10:37.328 Error Log 00:10:37.328 ========= 00:10:37.328 00:10:37.328 Arbitration 00:10:37.328 =========== 00:10:37.328 Arbitration Burst: no limit 00:10:37.328 00:10:37.328 Power Management 00:10:37.328 ================ 00:10:37.328 Number of Power States: 1 00:10:37.328 Current Power State: Power State #0 00:10:37.328 Power State #0: 
00:10:37.328 Max Power: 25.00 W 00:10:37.328 Non-Operational State: Operational 00:10:37.328 Entry Latency: 16 microseconds 00:10:37.328 Exit Latency: 4 microseconds 00:10:37.328 Relative Read Throughput: 0 00:10:37.328 Relative Read Latency: 0 00:10:37.328 Relative Write Throughput: 0 00:10:37.328 Relative Write Latency: 0 00:10:37.328 Idle Power: Not Reported 00:10:37.328 Active Power: Not Reported 00:10:37.328 Non-Operational Permissive Mode: Not Supported 00:10:37.328 00:10:37.328 Health Information 00:10:37.328 ================== 00:10:37.328 Critical Warnings: 00:10:37.328 Available Spare Space: OK 00:10:37.328 Temperature: OK 00:10:37.328 Device Reliability: OK 00:10:37.328 Read Only: No 00:10:37.328 Volatile Memory Backup: OK 00:10:37.328 Current Temperature: 323 Kelvin (50 Celsius) 00:10:37.328 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:37.328 Available Spare: 0% 00:10:37.328 Available Spare Threshold: 0% 00:10:37.328 Life Percentage Used: 0% 00:10:37.328 Data Units Read: 2194 00:10:37.328 Data Units Written: 1875 00:10:37.328 Host Read Commands: 102148 00:10:37.328 Host Write Commands: 97918 00:10:37.328 Controller Busy Time: 0 minutes 00:10:37.328 Power Cycles: 0 00:10:37.328 Power On Hours: 0 hours 00:10:37.328 Unsafe Shutdowns: 0 00:10:37.328 Unrecoverable Media Errors: 0 00:10:37.328 Lifetime Error Log Entries: 0 00:10:37.328 Warning Temperature Time: 0 minutes 00:10:37.328 Critical Temperature Time: 0 minutes 00:10:37.328 00:10:37.328 Number of Queues 00:10:37.328 ================ 00:10:37.328 Number of I/O Submission Queues: 64 00:10:37.328 Number of I/O Completion Queues: 64 00:10:37.328 00:10:37.328 ZNS Specific Controller Data 00:10:37.328 ============================ 00:10:37.328 Zone Append Size Limit: 0 00:10:37.328 00:10:37.328 00:10:37.328 Active Namespaces 00:10:37.328 ================= 00:10:37.328 Namespace ID:1 00:10:37.328 Error Recovery Timeout: Unlimited 00:10:37.328 Command Set Identifier: NVM (00h) 00:10:37.328 Deallocate: Supported 00:10:37.328 Deallocated/Unwritten Error: Supported 00:10:37.328 Deallocated Read Value: All 0x00 00:10:37.328 Deallocate in Write Zeroes: Not Supported 00:10:37.328 Deallocated Guard Field: 0xFFFF 00:10:37.328 Flush: Supported 00:10:37.328 Reservation: Not Supported 00:10:37.328 Namespace Sharing Capabilities: Private 00:10:37.328 Size (in LBAs): 1048576 (4GiB) 00:10:37.328 Capacity (in LBAs): 1048576 (4GiB) 00:10:37.328 Utilization (in LBAs): 1048576 (4GiB) 00:10:37.328 Thin Provisioning: Not Supported 00:10:37.328 Per-NS Atomic Units: No 00:10:37.328 Maximum Single Source Range Length: 128 00:10:37.328 Maximum Copy Length: 128 00:10:37.328 Maximum Source Range Count: 128 00:10:37.328 NGUID/EUI64 Never Reused: No 00:10:37.328 Namespace Write Protected: No 00:10:37.328 Number of LBA Formats: 8 00:10:37.328 Current LBA Format: LBA Format #04 00:10:37.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.328 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.328 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.328 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.328 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.328 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.328 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.328 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:37.328 00:10:37.328 Namespace ID:2 00:10:37.328 Error Recovery Timeout: Unlimited 00:10:37.328 Command Set Identifier: NVM (00h) 00:10:37.328 Deallocate: Supported 00:10:37.328 
Deallocated/Unwritten Error: Supported 00:10:37.328 Deallocated Read Value: All 0x00 00:10:37.328 Deallocate in Write Zeroes: Not Supported 00:10:37.328 Deallocated Guard Field: 0xFFFF 00:10:37.328 Flush: Supported 00:10:37.328 Reservation: Not Supported 00:10:37.328 Namespace Sharing Capabilities: Private 00:10:37.328 Size (in LBAs): 1048576 (4GiB) 00:10:37.328 Capacity (in LBAs): 1048576 (4GiB) 00:10:37.328 Utilization (in LBAs): 1048576 (4GiB) 00:10:37.328 Thin Provisioning: Not Supported 00:10:37.328 Per-NS Atomic Units: No 00:10:37.328 Maximum Single Source Range Length: 128 00:10:37.328 Maximum Copy Length: 128 00:10:37.328 Maximum Source Range Count: 128 00:10:37.328 NGUID/EUI64 Never Reused: No 00:10:37.328 Namespace Write Protected: No 00:10:37.328 Number of LBA Formats: 8 00:10:37.328 Current LBA Format: LBA Format #04 00:10:37.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.328 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.328 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.328 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.328 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.328 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.328 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.328 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:37.328 00:10:37.328 Namespace ID:3 00:10:37.328 Error Recovery Timeout: Unlimited 00:10:37.328 Command Set Identifier: NVM (00h) 00:10:37.328 Deallocate: Supported 00:10:37.328 Deallocated/Unwritten Error: Supported 00:10:37.328 Deallocated Read Value: All 0x00 00:10:37.328 Deallocate in Write Zeroes: Not Supported 00:10:37.328 Deallocated Guard Field: 0xFFFF 00:10:37.328 Flush: Supported 00:10:37.328 Reservation: Not Supported 00:10:37.328 Namespace Sharing Capabilities: Private 00:10:37.328 Size (in LBAs): 1048576 (4GiB) 00:10:37.328 Capacity (in LBAs): 1048576 (4GiB) 00:10:37.328 Utilization (in LBAs): 1048576 (4GiB) 00:10:37.328 Thin Provisioning: Not Supported 00:10:37.328 Per-NS Atomic Units: No 00:10:37.328 Maximum Single Source Range Length: 128 00:10:37.328 Maximum Copy Length: 128 00:10:37.328 Maximum Source Range Count: 128 00:10:37.329 NGUID/EUI64 Never Reused: No 00:10:37.329 Namespace Write Protected: No 00:10:37.329 Number of LBA Formats: 8 00:10:37.329 Current LBA Format: LBA Format #04 00:10:37.329 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.329 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.329 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.329 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.329 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.329 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.329 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.329 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:37.329 00:10:37.329 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:37.329 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:37.587 ===================================================== 00:10:37.587 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:37.587 ===================================================== 00:10:37.587 Controller Capabilities/Features 00:10:37.587 ================================ 00:10:37.587 Vendor ID: 1b36 00:10:37.587 Subsystem Vendor ID: 1af4 00:10:37.587 Serial Number: 12340 00:10:37.587 Model Number: QEMU NVMe Ctrl 
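Each identify pass above is driven by a transport-ID string of the form 'trtype:PCIe traddr:0000:00:10.0'. Assuming the usual spdk/nvme.h API, a minimal sketch of parsing such a string in an SPDK application might look like the following (error handling trimmed; this is not the test harness's actual code):

#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

/* Sketch: parse the same transport-ID string the harness passes to
 * spdk_nvme_identify via -r. On success, trid.trtype identifies the
 * PCIe transport and trid.traddr holds the controller's BDF. */
int main(void)
{
    struct spdk_nvme_transport_id trid;

    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:10.0") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }
    printf("trtype=%d traddr=%s\n", trid.trtype, trid.traddr);
    return 0;
}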
00:10:37.587 Firmware Version: 8.0.0 00:10:37.587 Recommended Arb Burst: 6 00:10:37.587 IEEE OUI Identifier: 00 54 52 00:10:37.587 Multi-path I/O 00:10:37.587 May have multiple subsystem ports: No 00:10:37.587 May have multiple controllers: No 00:10:37.587 Associated with SR-IOV VF: No 00:10:37.587 Max Data Transfer Size: 524288 00:10:37.587 Max Number of Namespaces: 256 00:10:37.587 Max Number of I/O Queues: 64 00:10:37.587 NVMe Specification Version (VS): 1.4 00:10:37.587 NVMe Specification Version (Identify): 1.4 00:10:37.587 Maximum Queue Entries: 2048 00:10:37.587 Contiguous Queues Required: Yes 00:10:37.587 Arbitration Mechanisms Supported 00:10:37.587 Weighted Round Robin: Not Supported 00:10:37.587 Vendor Specific: Not Supported 00:10:37.587 Reset Timeout: 7500 ms 00:10:37.587 Doorbell Stride: 4 bytes 00:10:37.587 NVM Subsystem Reset: Not Supported 00:10:37.587 Command Sets Supported 00:10:37.587 NVM Command Set: Supported 00:10:37.587 Boot Partition: Not Supported 00:10:37.587 Memory Page Size Minimum: 4096 bytes 00:10:37.587 Memory Page Size Maximum: 65536 bytes 00:10:37.587 Persistent Memory Region: Not Supported 00:10:37.587 Optional Asynchronous Events Supported 00:10:37.587 Namespace Attribute Notices: Supported 00:10:37.587 Firmware Activation Notices: Not Supported 00:10:37.587 ANA Change Notices: Not Supported 00:10:37.587 PLE Aggregate Log Change Notices: Not Supported 00:10:37.587 LBA Status Info Alert Notices: Not Supported 00:10:37.587 EGE Aggregate Log Change Notices: Not Supported 00:10:37.587 Normal NVM Subsystem Shutdown event: Not Supported 00:10:37.587 Zone Descriptor Change Notices: Not Supported 00:10:37.587 Discovery Log Change Notices: Not Supported 00:10:37.587 Controller Attributes 00:10:37.587 128-bit Host Identifier: Not Supported 00:10:37.587 Non-Operational Permissive Mode: Not Supported 00:10:37.587 NVM Sets: Not Supported 00:10:37.587 Read Recovery Levels: Not Supported 00:10:37.587 Endurance Groups: Not Supported 00:10:37.587 Predictable Latency Mode: Not Supported 00:10:37.587 Traffic Based Keep Alive: Not Supported 00:10:37.587 Namespace Granularity: Not Supported 00:10:37.587 SQ Associations: Not Supported 00:10:37.587 UUID List: Not Supported 00:10:37.587 Multi-Domain Subsystem: Not Supported 00:10:37.587 Fixed Capacity Management: Not Supported 00:10:37.587 Variable Capacity Management: Not Supported 00:10:37.587 Delete Endurance Group: Not Supported 00:10:37.587 Delete NVM Set: Not Supported 00:10:37.587 Extended LBA Formats Supported: Supported 00:10:37.587 Flexible Data Placement Supported: Not Supported 00:10:37.587 00:10:37.587 Controller Memory Buffer Support 00:10:37.587 ================================ 00:10:37.587 Supported: No 00:10:37.587 00:10:37.587 Persistent Memory Region Support 00:10:37.587 ================================ 00:10:37.587 Supported: No 00:10:37.587 00:10:37.587 Admin Command Set Attributes 00:10:37.587 ============================ 00:10:37.587 Security Send/Receive: Not Supported 00:10:37.587 Format NVM: Supported 00:10:37.587 Firmware Activate/Download: Not Supported 00:10:37.587 Namespace Management: Supported 00:10:37.587 Device Self-Test: Not Supported 00:10:37.587 Directives: Supported 00:10:37.587 NVMe-MI: Not Supported 00:10:37.587 Virtualization Management: Not Supported 00:10:37.587 Doorbell Buffer Config: Supported 00:10:37.587 Get LBA Status Capability: Not Supported 00:10:37.587 Command & Feature Lockdown Capability: Not Supported 00:10:37.587 Abort Command Limit: 4 00:10:37.587 Async Event Request 
Limit: 4 00:10:37.587 Number of Firmware Slots: N/A 00:10:37.587 Firmware Slot 1 Read-Only: N/A 00:10:37.587 Firmware Activation Without Reset: N/A 00:10:37.587 Multiple Update Detection Support: N/A 00:10:37.587 Firmware Update Granularity: No Information Provided 00:10:37.588 Per-Namespace SMART Log: Yes 00:10:37.588 Asymmetric Namespace Access Log Page: Not Supported 00:10:37.588 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:37.588 Command Effects Log Page: Supported 00:10:37.588 Get Log Page Extended Data: Supported 00:10:37.588 Telemetry Log Pages: Not Supported 00:10:37.588 Persistent Event Log Pages: Not Supported 00:10:37.588 Supported Log Pages Log Page: May Support 00:10:37.588 Commands Supported & Effects Log Page: Not Supported 00:10:37.588 Feature Identifiers & Effects Log Page: May Support 00:10:37.588 NVMe-MI Commands & Effects Log Page: May Support 00:10:37.588 Data Area 4 for Telemetry Log: Not Supported 00:10:37.588 Error Log Page Entries Supported: 1 00:10:37.588 Keep Alive: Not Supported 00:10:37.588 00:10:37.588 NVM Command Set Attributes 00:10:37.588 ========================== 00:10:37.588 Submission Queue Entry Size 00:10:37.588 Max: 64 00:10:37.588 Min: 64 00:10:37.588 Completion Queue Entry Size 00:10:37.588 Max: 16 00:10:37.588 Min: 16 00:10:37.588 Number of Namespaces: 256 00:10:37.588 Compare Command: Supported 00:10:37.588 Write Uncorrectable Command: Not Supported 00:10:37.588 Dataset Management Command: Supported 00:10:37.588 Write Zeroes Command: Supported 00:10:37.588 Set Features Save Field: Supported 00:10:37.588 Reservations: Not Supported 00:10:37.588 Timestamp: Supported 00:10:37.588 Copy: Supported 00:10:37.588 Volatile Write Cache: Present 00:10:37.588 Atomic Write Unit (Normal): 1 00:10:37.588 Atomic Write Unit (PFail): 1 00:10:37.588 Atomic Compare & Write Unit: 1 00:10:37.588 Fused Compare & Write: Not Supported 00:10:37.588 Scatter-Gather List 00:10:37.588 SGL Command Set: Supported 00:10:37.588 SGL Keyed: Not Supported 00:10:37.588 SGL Bit Bucket Descriptor: Not Supported 00:10:37.588 SGL Metadata Pointer: Not Supported 00:10:37.588 Oversized SGL: Not Supported 00:10:37.588 SGL Metadata Address: Not Supported 00:10:37.588 SGL Offset: Not Supported 00:10:37.588 Transport SGL Data Block: Not Supported 00:10:37.588 Replay Protected Memory Block: Not Supported 00:10:37.588 00:10:37.588 Firmware Slot Information 00:10:37.588 ========================= 00:10:37.588 Active slot: 1 00:10:37.588 Slot 1 Firmware Revision: 1.0 00:10:37.588 00:10:37.588 00:10:37.588 Commands Supported and Effects 00:10:37.588 ============================== 00:10:37.588 Admin Commands 00:10:37.588 -------------- 00:10:37.588 Delete I/O Submission Queue (00h): Supported 00:10:37.588 Create I/O Submission Queue (01h): Supported 00:10:37.588 Get Log Page (02h): Supported 00:10:37.588 Delete I/O Completion Queue (04h): Supported 00:10:37.588 Create I/O Completion Queue (05h): Supported 00:10:37.588 Identify (06h): Supported 00:10:37.588 Abort (08h): Supported 00:10:37.588 Set Features (09h): Supported 00:10:37.588 Get Features (0Ah): Supported 00:10:37.588 Asynchronous Event Request (0Ch): Supported 00:10:37.588 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:37.588 Directive Send (19h): Supported 00:10:37.588 Directive Receive (1Ah): Supported 00:10:37.588 Virtualization Management (1Ch): Supported 00:10:37.588 Doorbell Buffer Config (7Ch): Supported 00:10:37.588 Format NVM (80h): Supported LBA-Change 00:10:37.588 I/O Commands 00:10:37.588 ------------ 
00:10:37.588 Flush (00h): Supported LBA-Change 00:10:37.588 Write (01h): Supported LBA-Change 00:10:37.588 Read (02h): Supported 00:10:37.588 Compare (05h): Supported 00:10:37.588 Write Zeroes (08h): Supported LBA-Change 00:10:37.588 Dataset Management (09h): Supported LBA-Change 00:10:37.588 Unknown (0Ch): Supported 00:10:37.588 Unknown (12h): Supported 00:10:37.588 Copy (19h): Supported LBA-Change 00:10:37.588 Unknown (1Dh): Supported LBA-Change 00:10:37.588 00:10:37.588 Error Log 00:10:37.588 ========= 00:10:37.588 00:10:37.588 Arbitration 00:10:37.588 =========== 00:10:37.588 Arbitration Burst: no limit 00:10:37.588 00:10:37.588 Power Management 00:10:37.588 ================ 00:10:37.588 Number of Power States: 1 00:10:37.588 Current Power State: Power State #0 00:10:37.588 Power State #0: 00:10:37.588 Max Power: 25.00 W 00:10:37.588 Non-Operational State: Operational 00:10:37.588 Entry Latency: 16 microseconds 00:10:37.588 Exit Latency: 4 microseconds 00:10:37.588 Relative Read Throughput: 0 00:10:37.588 Relative Read Latency: 0 00:10:37.588 Relative Write Throughput: 0 00:10:37.588 Relative Write Latency: 0 00:10:37.588 Idle Power: Not Reported 00:10:37.588 Active Power: Not Reported 00:10:37.588 Non-Operational Permissive Mode: Not Supported 00:10:37.588 00:10:37.588 Health Information 00:10:37.588 ================== 00:10:37.588 Critical Warnings: 00:10:37.588 Available Spare Space: OK 00:10:37.588 Temperature: OK 00:10:37.588 Device Reliability: OK 00:10:37.588 Read Only: No 00:10:37.588 Volatile Memory Backup: OK 00:10:37.588 Current Temperature: 323 Kelvin (50 Celsius) 00:10:37.588 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:37.588 Available Spare: 0% 00:10:37.588 Available Spare Threshold: 0% 00:10:37.588 Life Percentage Used: 0% 00:10:37.588 Data Units Read: 1026 00:10:37.588 Data Units Written: 858 00:10:37.588 Host Read Commands: 48968 00:10:37.588 Host Write Commands: 47451 00:10:37.588 Controller Busy Time: 0 minutes 00:10:37.588 Power Cycles: 0 00:10:37.588 Power On Hours: 0 hours 00:10:37.588 Unsafe Shutdowns: 0 00:10:37.588 Unrecoverable Media Errors: 0 00:10:37.588 Lifetime Error Log Entries: 0 00:10:37.588 Warning Temperature Time: 0 minutes 00:10:37.588 Critical Temperature Time: 0 minutes 00:10:37.588 00:10:37.588 Number of Queues 00:10:37.588 ================ 00:10:37.588 Number of I/O Submission Queues: 64 00:10:37.588 Number of I/O Completion Queues: 64 00:10:37.588 00:10:37.588 ZNS Specific Controller Data 00:10:37.588 ============================ 00:10:37.588 Zone Append Size Limit: 0 00:10:37.588 00:10:37.588 00:10:37.588 Active Namespaces 00:10:37.588 ================= 00:10:37.588 Namespace ID:1 00:10:37.588 Error Recovery Timeout: Unlimited 00:10:37.588 Command Set Identifier: NVM (00h) 00:10:37.588 Deallocate: Supported 00:10:37.588 Deallocated/Unwritten Error: Supported 00:10:37.588 Deallocated Read Value: All 0x00 00:10:37.588 Deallocate in Write Zeroes: Not Supported 00:10:37.588 Deallocated Guard Field: 0xFFFF 00:10:37.588 Flush: Supported 00:10:37.588 Reservation: Not Supported 00:10:37.588 Metadata Transferred as: Separate Metadata Buffer 00:10:37.588 Namespace Sharing Capabilities: Private 00:10:37.588 Size (in LBAs): 1548666 (5GiB) 00:10:37.588 Capacity (in LBAs): 1548666 (5GiB) 00:10:37.588 Utilization (in LBAs): 1548666 (5GiB) 00:10:37.588 Thin Provisioning: Not Supported 00:10:37.588 Per-NS Atomic Units: No 00:10:37.588 Maximum Single Source Range Length: 128 00:10:37.588 Maximum Copy Length: 128 00:10:37.588 Maximum Source Range Count: 
128 00:10:37.588 NGUID/EUI64 Never Reused: No 00:10:37.588 Namespace Write Protected: No 00:10:37.588 Number of LBA Formats: 8 00:10:37.588 Current LBA Format: LBA Format #07 00:10:37.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.588 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.588 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.588 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.588 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.588 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.588 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.588 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:37.588 00:10:37.588 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:37.588 09:59:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:37.847 ===================================================== 00:10:37.847 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:37.847 ===================================================== 00:10:37.847 Controller Capabilities/Features 00:10:37.847 ================================ 00:10:37.847 Vendor ID: 1b36 00:10:37.847 Subsystem Vendor ID: 1af4 00:10:37.847 Serial Number: 12341 00:10:37.847 Model Number: QEMU NVMe Ctrl 00:10:37.847 Firmware Version: 8.0.0 00:10:37.847 Recommended Arb Burst: 6 00:10:37.847 IEEE OUI Identifier: 00 54 52 00:10:37.847 Multi-path I/O 00:10:37.847 May have multiple subsystem ports: No 00:10:37.847 May have multiple controllers: No 00:10:37.847 Associated with SR-IOV VF: No 00:10:37.847 Max Data Transfer Size: 524288 00:10:37.847 Max Number of Namespaces: 256 00:10:37.847 Max Number of I/O Queues: 64 00:10:37.847 NVMe Specification Version (VS): 1.4 00:10:37.847 NVMe Specification Version (Identify): 1.4 00:10:37.847 Maximum Queue Entries: 2048 00:10:37.847 Contiguous Queues Required: Yes 00:10:37.847 Arbitration Mechanisms Supported 00:10:37.847 Weighted Round Robin: Not Supported 00:10:37.847 Vendor Specific: Not Supported 00:10:37.847 Reset Timeout: 7500 ms 00:10:37.847 Doorbell Stride: 4 bytes 00:10:37.847 NVM Subsystem Reset: Not Supported 00:10:37.847 Command Sets Supported 00:10:37.847 NVM Command Set: Supported 00:10:37.847 Boot Partition: Not Supported 00:10:37.847 Memory Page Size Minimum: 4096 bytes 00:10:37.847 Memory Page Size Maximum: 65536 bytes 00:10:37.847 Persistent Memory Region: Not Supported 00:10:37.847 Optional Asynchronous Events Supported 00:10:37.847 Namespace Attribute Notices: Supported 00:10:37.847 Firmware Activation Notices: Not Supported 00:10:37.847 ANA Change Notices: Not Supported 00:10:37.847 PLE Aggregate Log Change Notices: Not Supported 00:10:37.847 LBA Status Info Alert Notices: Not Supported 00:10:37.847 EGE Aggregate Log Change Notices: Not Supported 00:10:37.847 Normal NVM Subsystem Shutdown event: Not Supported 00:10:37.847 Zone Descriptor Change Notices: Not Supported 00:10:37.847 Discovery Log Change Notices: Not Supported 00:10:37.847 Controller Attributes 00:10:37.847 128-bit Host Identifier: Not Supported 00:10:37.847 Non-Operational Permissive Mode: Not Supported 00:10:37.847 NVM Sets: Not Supported 00:10:37.847 Read Recovery Levels: Not Supported 00:10:37.847 Endurance Groups: Not Supported 00:10:37.847 Predictable Latency Mode: Not Supported 00:10:37.847 Traffic Based Keep Alive: Not Supported 00:10:37.847 Namespace Granularity: Not Supported 00:10:37.847 SQ Associations: Not Supported 
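The queue attributes repeated through these dumps (Maximum Queue Entries: 2048, 64-byte submission entries, 16-byte completion entries) imply fixed ring sizes; as a back-of-the-envelope plain-C illustration, a full-depth submission queue occupies 128 KiB and a completion queue 32 KiB.

#include <stdint.h>
#include <stdio.h>

/* Ring sizes implied by the attributes above: 2048 entries at 64 bytes
 * per SQE and 16 bytes per CQE. */
int main(void)
{
    uint32_t entries = 2048;
    printf("SQ ring: %u KiB\n", entries * 64 / 1024); /* 128 */
    printf("CQ ring: %u KiB\n", entries * 16 / 1024); /* 32 */
    return 0;
}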
00:10:37.847 UUID List: Not Supported 00:10:37.847 Multi-Domain Subsystem: Not Supported 00:10:37.847 Fixed Capacity Management: Not Supported 00:10:37.847 Variable Capacity Management: Not Supported 00:10:37.847 Delete Endurance Group: Not Supported 00:10:37.847 Delete NVM Set: Not Supported 00:10:37.847 Extended LBA Formats Supported: Supported 00:10:37.847 Flexible Data Placement Supported: Not Supported 00:10:37.847 00:10:37.847 Controller Memory Buffer Support 00:10:37.847 ================================ 00:10:37.847 Supported: No 00:10:37.847 00:10:37.847 Persistent Memory Region Support 00:10:37.847 ================================ 00:10:37.847 Supported: No 00:10:37.847 00:10:37.847 Admin Command Set Attributes 00:10:37.847 ============================ 00:10:37.847 Security Send/Receive: Not Supported 00:10:37.847 Format NVM: Supported 00:10:37.847 Firmware Activate/Download: Not Supported 00:10:37.847 Namespace Management: Supported 00:10:37.847 Device Self-Test: Not Supported 00:10:37.847 Directives: Supported 00:10:37.847 NVMe-MI: Not Supported 00:10:37.847 Virtualization Management: Not Supported 00:10:37.847 Doorbell Buffer Config: Supported 00:10:37.847 Get LBA Status Capability: Not Supported 00:10:37.847 Command & Feature Lockdown Capability: Not Supported 00:10:37.847 Abort Command Limit: 4 00:10:37.847 Async Event Request Limit: 4 00:10:37.848 Number of Firmware Slots: N/A 00:10:37.848 Firmware Slot 1 Read-Only: N/A 00:10:37.848 Firmware Activation Without Reset: N/A 00:10:37.848 Multiple Update Detection Support: N/A 00:10:37.848 Firmware Update Granularity: No Information Provided 00:10:37.848 Per-Namespace SMART Log: Yes 00:10:37.848 Asymmetric Namespace Access Log Page: Not Supported 00:10:37.848 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:37.848 Command Effects Log Page: Supported 00:10:37.848 Get Log Page Extended Data: Supported 00:10:37.848 Telemetry Log Pages: Not Supported 00:10:37.848 Persistent Event Log Pages: Not Supported 00:10:37.848 Supported Log Pages Log Page: May Support 00:10:37.848 Commands Supported & Effects Log Page: Not Supported 00:10:37.848 Feature Identifiers & Effects Log Page: May Support 00:10:37.848 NVMe-MI Commands & Effects Log Page: May Support 00:10:37.848 Data Area 4 for Telemetry Log: Not Supported 00:10:37.848 Error Log Page Entries Supported: 1 00:10:37.848 Keep Alive: Not Supported 00:10:37.848 00:10:37.848 NVM Command Set Attributes 00:10:37.848 ========================== 00:10:37.848 Submission Queue Entry Size 00:10:37.848 Max: 64 00:10:37.848 Min: 64 00:10:37.848 Completion Queue Entry Size 00:10:37.848 Max: 16 00:10:37.848 Min: 16 00:10:37.848 Number of Namespaces: 256 00:10:37.848 Compare Command: Supported 00:10:37.848 Write Uncorrectable Command: Not Supported 00:10:37.848 Dataset Management Command: Supported 00:10:37.848 Write Zeroes Command: Supported 00:10:37.848 Set Features Save Field: Supported 00:10:37.848 Reservations: Not Supported 00:10:37.848 Timestamp: Supported 00:10:37.848 Copy: Supported 00:10:37.848 Volatile Write Cache: Present 00:10:37.848 Atomic Write Unit (Normal): 1 00:10:37.848 Atomic Write Unit (PFail): 1 00:10:37.848 Atomic Compare & Write Unit: 1 00:10:37.848 Fused Compare & Write: Not Supported 00:10:37.848 Scatter-Gather List 00:10:37.848 SGL Command Set: Supported 00:10:37.848 SGL Keyed: Not Supported 00:10:37.848 SGL Bit Bucket Descriptor: Not Supported 00:10:37.848 SGL Metadata Pointer: Not Supported 00:10:37.848 Oversized SGL: Not Supported 00:10:37.848 SGL Metadata Address: Not 
Supported 00:10:37.848 SGL Offset: Not Supported 00:10:37.848 Transport SGL Data Block: Not Supported 00:10:37.848 Replay Protected Memory Block: Not Supported 00:10:37.848 00:10:37.848 Firmware Slot Information 00:10:37.848 ========================= 00:10:37.848 Active slot: 1 00:10:37.848 Slot 1 Firmware Revision: 1.0 00:10:37.848 00:10:37.848 00:10:37.848 Commands Supported and Effects 00:10:37.848 ============================== 00:10:37.848 Admin Commands 00:10:37.848 -------------- 00:10:37.848 Delete I/O Submission Queue (00h): Supported 00:10:37.848 Create I/O Submission Queue (01h): Supported 00:10:37.848 Get Log Page (02h): Supported 00:10:37.848 Delete I/O Completion Queue (04h): Supported 00:10:37.848 Create I/O Completion Queue (05h): Supported 00:10:37.848 Identify (06h): Supported 00:10:37.848 Abort (08h): Supported 00:10:37.848 Set Features (09h): Supported 00:10:37.848 Get Features (0Ah): Supported 00:10:37.848 Asynchronous Event Request (0Ch): Supported 00:10:37.848 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:37.848 Directive Send (19h): Supported 00:10:37.848 Directive Receive (1Ah): Supported 00:10:37.848 Virtualization Management (1Ch): Supported 00:10:37.848 Doorbell Buffer Config (7Ch): Supported 00:10:37.848 Format NVM (80h): Supported LBA-Change 00:10:37.848 I/O Commands 00:10:37.848 ------------ 00:10:37.848 Flush (00h): Supported LBA-Change 00:10:37.848 Write (01h): Supported LBA-Change 00:10:37.848 Read (02h): Supported 00:10:37.848 Compare (05h): Supported 00:10:37.848 Write Zeroes (08h): Supported LBA-Change 00:10:37.848 Dataset Management (09h): Supported LBA-Change 00:10:37.848 Unknown (0Ch): Supported 00:10:37.848 Unknown (12h): Supported 00:10:37.848 Copy (19h): Supported LBA-Change 00:10:37.848 Unknown (1Dh): Supported LBA-Change 00:10:37.848 00:10:37.848 Error Log 00:10:37.848 ========= 00:10:37.848 00:10:37.848 Arbitration 00:10:37.848 =========== 00:10:37.848 Arbitration Burst: no limit 00:10:37.848 00:10:37.848 Power Management 00:10:37.848 ================ 00:10:37.848 Number of Power States: 1 00:10:37.848 Current Power State: Power State #0 00:10:37.848 Power State #0: 00:10:37.848 Max Power: 25.00 W 00:10:37.848 Non-Operational State: Operational 00:10:37.848 Entry Latency: 16 microseconds 00:10:37.848 Exit Latency: 4 microseconds 00:10:37.848 Relative Read Throughput: 0 00:10:37.848 Relative Read Latency: 0 00:10:37.848 Relative Write Throughput: 0 00:10:37.848 Relative Write Latency: 0 00:10:37.848 Idle Power: Not Reported 00:10:37.848 Active Power: Not Reported 00:10:37.848 Non-Operational Permissive Mode: Not Supported 00:10:37.848 00:10:37.848 Health Information 00:10:37.848 ================== 00:10:37.848 Critical Warnings: 00:10:37.848 Available Spare Space: OK 00:10:37.848 Temperature: OK 00:10:37.848 Device Reliability: OK 00:10:37.848 Read Only: No 00:10:37.848 Volatile Memory Backup: OK 00:10:37.848 Current Temperature: 323 Kelvin (50 Celsius) 00:10:37.848 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:37.848 Available Spare: 0% 00:10:37.848 Available Spare Threshold: 0% 00:10:37.848 Life Percentage Used: 0% 00:10:37.848 Data Units Read: 731 00:10:37.848 Data Units Written: 580 00:10:37.848 Host Read Commands: 34515 00:10:37.848 Host Write Commands: 32242 00:10:37.848 Controller Busy Time: 0 minutes 00:10:37.848 Power Cycles: 0 00:10:37.848 Power On Hours: 0 hours 00:10:37.848 Unsafe Shutdowns: 0 00:10:37.848 Unrecoverable Media Errors: 0 00:10:37.848 Lifetime Error Log Entries: 0 00:10:37.848 Warning 
Temperature Time: 0 minutes 00:10:37.848 Critical Temperature Time: 0 minutes 00:10:37.848 00:10:37.848 Number of Queues 00:10:37.848 ================ 00:10:37.848 Number of I/O Submission Queues: 64 00:10:37.848 Number of I/O Completion Queues: 64 00:10:37.848 00:10:37.848 ZNS Specific Controller Data 00:10:37.848 ============================ 00:10:37.848 Zone Append Size Limit: 0 00:10:37.848 00:10:37.848 00:10:37.848 Active Namespaces 00:10:37.848 ================= 00:10:37.848 Namespace ID:1 00:10:37.848 Error Recovery Timeout: Unlimited 00:10:37.848 Command Set Identifier: NVM (00h) 00:10:37.848 Deallocate: Supported 00:10:37.848 Deallocated/Unwritten Error: Supported 00:10:37.848 Deallocated Read Value: All 0x00 00:10:37.848 Deallocate in Write Zeroes: Not Supported 00:10:37.848 Deallocated Guard Field: 0xFFFF 00:10:37.848 Flush: Supported 00:10:37.848 Reservation: Not Supported 00:10:37.848 Namespace Sharing Capabilities: Private 00:10:37.848 Size (in LBAs): 1310720 (5GiB) 00:10:37.848 Capacity (in LBAs): 1310720 (5GiB) 00:10:37.848 Utilization (in LBAs): 1310720 (5GiB) 00:10:37.848 Thin Provisioning: Not Supported 00:10:37.848 Per-NS Atomic Units: No 00:10:37.848 Maximum Single Source Range Length: 128 00:10:37.848 Maximum Copy Length: 128 00:10:37.848 Maximum Source Range Count: 128 00:10:37.848 NGUID/EUI64 Never Reused: No 00:10:37.848 Namespace Write Protected: No 00:10:37.848 Number of LBA Formats: 8 00:10:37.848 Current LBA Format: LBA Format #04 00:10:37.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:37.848 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:37.848 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:37.848 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:37.848 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:37.848 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:37.848 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:37.848 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:37.848 00:10:37.848 09:59:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:37.848 09:59:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:38.108 ===================================================== 00:10:38.108 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:38.108 ===================================================== 00:10:38.108 Controller Capabilities/Features 00:10:38.108 ================================ 00:10:38.108 Vendor ID: 1b36 00:10:38.108 Subsystem Vendor ID: 1af4 00:10:38.108 Serial Number: 12342 00:10:38.108 Model Number: QEMU NVMe Ctrl 00:10:38.108 Firmware Version: 8.0.0 00:10:38.108 Recommended Arb Burst: 6 00:10:38.108 IEEE OUI Identifier: 00 54 52 00:10:38.108 Multi-path I/O 00:10:38.108 May have multiple subsystem ports: No 00:10:38.108 May have multiple controllers: No 00:10:38.108 Associated with SR-IOV VF: No 00:10:38.108 Max Data Transfer Size: 524288 00:10:38.108 Max Number of Namespaces: 256 00:10:38.108 Max Number of I/O Queues: 64 00:10:38.108 NVMe Specification Version (VS): 1.4 00:10:38.108 NVMe Specification Version (Identify): 1.4 00:10:38.108 Maximum Queue Entries: 2048 00:10:38.108 Contiguous Queues Required: Yes 00:10:38.108 Arbitration Mechanisms Supported 00:10:38.108 Weighted Round Robin: Not Supported 00:10:38.108 Vendor Specific: Not Supported 00:10:38.108 Reset Timeout: 7500 ms 00:10:38.108 Doorbell Stride: 4 bytes 00:10:38.108 NVM Subsystem Reset: Not Supported 
00:10:38.108 Command Sets Supported 00:10:38.108 NVM Command Set: Supported 00:10:38.108 Boot Partition: Not Supported 00:10:38.108 Memory Page Size Minimum: 4096 bytes 00:10:38.108 Memory Page Size Maximum: 65536 bytes 00:10:38.108 Persistent Memory Region: Not Supported 00:10:38.108 Optional Asynchronous Events Supported 00:10:38.108 Namespace Attribute Notices: Supported 00:10:38.108 Firmware Activation Notices: Not Supported 00:10:38.108 ANA Change Notices: Not Supported 00:10:38.108 PLE Aggregate Log Change Notices: Not Supported 00:10:38.108 LBA Status Info Alert Notices: Not Supported 00:10:38.108 EGE Aggregate Log Change Notices: Not Supported 00:10:38.108 Normal NVM Subsystem Shutdown event: Not Supported 00:10:38.108 Zone Descriptor Change Notices: Not Supported 00:10:38.108 Discovery Log Change Notices: Not Supported 00:10:38.108 Controller Attributes 00:10:38.108 128-bit Host Identifier: Not Supported 00:10:38.108 Non-Operational Permissive Mode: Not Supported 00:10:38.108 NVM Sets: Not Supported 00:10:38.108 Read Recovery Levels: Not Supported 00:10:38.108 Endurance Groups: Not Supported 00:10:38.108 Predictable Latency Mode: Not Supported 00:10:38.108 Traffic Based Keep ALive: Not Supported 00:10:38.108 Namespace Granularity: Not Supported 00:10:38.108 SQ Associations: Not Supported 00:10:38.108 UUID List: Not Supported 00:10:38.108 Multi-Domain Subsystem: Not Supported 00:10:38.108 Fixed Capacity Management: Not Supported 00:10:38.108 Variable Capacity Management: Not Supported 00:10:38.108 Delete Endurance Group: Not Supported 00:10:38.108 Delete NVM Set: Not Supported 00:10:38.108 Extended LBA Formats Supported: Supported 00:10:38.108 Flexible Data Placement Supported: Not Supported 00:10:38.108 00:10:38.108 Controller Memory Buffer Support 00:10:38.108 ================================ 00:10:38.108 Supported: No 00:10:38.108 00:10:38.108 Persistent Memory Region Support 00:10:38.108 ================================ 00:10:38.108 Supported: No 00:10:38.108 00:10:38.108 Admin Command Set Attributes 00:10:38.108 ============================ 00:10:38.108 Security Send/Receive: Not Supported 00:10:38.108 Format NVM: Supported 00:10:38.108 Firmware Activate/Download: Not Supported 00:10:38.108 Namespace Management: Supported 00:10:38.108 Device Self-Test: Not Supported 00:10:38.108 Directives: Supported 00:10:38.108 NVMe-MI: Not Supported 00:10:38.108 Virtualization Management: Not Supported 00:10:38.108 Doorbell Buffer Config: Supported 00:10:38.108 Get LBA Status Capability: Not Supported 00:10:38.108 Command & Feature Lockdown Capability: Not Supported 00:10:38.108 Abort Command Limit: 4 00:10:38.108 Async Event Request Limit: 4 00:10:38.108 Number of Firmware Slots: N/A 00:10:38.108 Firmware Slot 1 Read-Only: N/A 00:10:38.108 Firmware Activation Without Reset: N/A 00:10:38.108 Multiple Update Detection Support: N/A 00:10:38.108 Firmware Update Granularity: No Information Provided 00:10:38.108 Per-Namespace SMART Log: Yes 00:10:38.108 Asymmetric Namespace Access Log Page: Not Supported 00:10:38.108 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:38.108 Command Effects Log Page: Supported 00:10:38.108 Get Log Page Extended Data: Supported 00:10:38.108 Telemetry Log Pages: Not Supported 00:10:38.108 Persistent Event Log Pages: Not Supported 00:10:38.108 Supported Log Pages Log Page: May Support 00:10:38.108 Commands Supported & Effects Log Page: Not Supported 00:10:38.108 Feature Identifiers & Effects Log Page:May Support 00:10:38.108 NVMe-MI Commands & Effects Log Page: May 
Support 00:10:38.108 Data Area 4 for Telemetry Log: Not Supported 00:10:38.108 Error Log Page Entries Supported: 1 00:10:38.108 Keep Alive: Not Supported 00:10:38.108 00:10:38.108 NVM Command Set Attributes 00:10:38.108 ========================== 00:10:38.108 Submission Queue Entry Size 00:10:38.108 Max: 64 00:10:38.108 Min: 64 00:10:38.108 Completion Queue Entry Size 00:10:38.108 Max: 16 00:10:38.108 Min: 16 00:10:38.108 Number of Namespaces: 256 00:10:38.108 Compare Command: Supported 00:10:38.108 Write Uncorrectable Command: Not Supported 00:10:38.108 Dataset Management Command: Supported 00:10:38.108 Write Zeroes Command: Supported 00:10:38.108 Set Features Save Field: Supported 00:10:38.108 Reservations: Not Supported 00:10:38.108 Timestamp: Supported 00:10:38.108 Copy: Supported 00:10:38.108 Volatile Write Cache: Present 00:10:38.108 Atomic Write Unit (Normal): 1 00:10:38.108 Atomic Write Unit (PFail): 1 00:10:38.108 Atomic Compare & Write Unit: 1 00:10:38.108 Fused Compare & Write: Not Supported 00:10:38.108 Scatter-Gather List 00:10:38.108 SGL Command Set: Supported 00:10:38.108 SGL Keyed: Not Supported 00:10:38.108 SGL Bit Bucket Descriptor: Not Supported 00:10:38.108 SGL Metadata Pointer: Not Supported 00:10:38.108 Oversized SGL: Not Supported 00:10:38.108 SGL Metadata Address: Not Supported 00:10:38.108 SGL Offset: Not Supported 00:10:38.108 Transport SGL Data Block: Not Supported 00:10:38.108 Replay Protected Memory Block: Not Supported 00:10:38.108 00:10:38.108 Firmware Slot Information 00:10:38.108 ========================= 00:10:38.108 Active slot: 1 00:10:38.108 Slot 1 Firmware Revision: 1.0 00:10:38.108 00:10:38.108 00:10:38.108 Commands Supported and Effects 00:10:38.108 ============================== 00:10:38.108 Admin Commands 00:10:38.108 -------------- 00:10:38.108 Delete I/O Submission Queue (00h): Supported 00:10:38.108 Create I/O Submission Queue (01h): Supported 00:10:38.108 Get Log Page (02h): Supported 00:10:38.108 Delete I/O Completion Queue (04h): Supported 00:10:38.108 Create I/O Completion Queue (05h): Supported 00:10:38.108 Identify (06h): Supported 00:10:38.108 Abort (08h): Supported 00:10:38.108 Set Features (09h): Supported 00:10:38.108 Get Features (0Ah): Supported 00:10:38.108 Asynchronous Event Request (0Ch): Supported 00:10:38.108 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:38.108 Directive Send (19h): Supported 00:10:38.108 Directive Receive (1Ah): Supported 00:10:38.108 Virtualization Management (1Ch): Supported 00:10:38.108 Doorbell Buffer Config (7Ch): Supported 00:10:38.108 Format NVM (80h): Supported LBA-Change 00:10:38.108 I/O Commands 00:10:38.108 ------------ 00:10:38.108 Flush (00h): Supported LBA-Change 00:10:38.108 Write (01h): Supported LBA-Change 00:10:38.108 Read (02h): Supported 00:10:38.108 Compare (05h): Supported 00:10:38.108 Write Zeroes (08h): Supported LBA-Change 00:10:38.109 Dataset Management (09h): Supported LBA-Change 00:10:38.109 Unknown (0Ch): Supported 00:10:38.109 Unknown (12h): Supported 00:10:38.109 Copy (19h): Supported LBA-Change 00:10:38.109 Unknown (1Dh): Supported LBA-Change 00:10:38.109 00:10:38.109 Error Log 00:10:38.109 ========= 00:10:38.109 00:10:38.109 Arbitration 00:10:38.109 =========== 00:10:38.109 Arbitration Burst: no limit 00:10:38.109 00:10:38.109 Power Management 00:10:38.109 ================ 00:10:38.109 Number of Power States: 1 00:10:38.109 Current Power State: Power State #0 00:10:38.109 Power State #0: 00:10:38.109 Max Power: 25.00 W 00:10:38.109 Non-Operational State: 
Operational 00:10:38.109 Entry Latency: 16 microseconds 00:10:38.109 Exit Latency: 4 microseconds 00:10:38.109 Relative Read Throughput: 0 00:10:38.109 Relative Read Latency: 0 00:10:38.109 Relative Write Throughput: 0 00:10:38.109 Relative Write Latency: 0 00:10:38.109 Idle Power: Not Reported 00:10:38.109 Active Power: Not Reported 00:10:38.109 Non-Operational Permissive Mode: Not Supported 00:10:38.109 00:10:38.109 Health Information 00:10:38.109 ================== 00:10:38.109 Critical Warnings: 00:10:38.109 Available Spare Space: OK 00:10:38.109 Temperature: OK 00:10:38.109 Device Reliability: OK 00:10:38.109 Read Only: No 00:10:38.109 Volatile Memory Backup: OK 00:10:38.109 Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.109 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:38.109 Available Spare: 0% 00:10:38.109 Available Spare Threshold: 0% 00:10:38.109 Life Percentage Used: 0% 00:10:38.109 Data Units Read: 2194 00:10:38.109 Data Units Written: 1875 00:10:38.109 Host Read Commands: 102148 00:10:38.109 Host Write Commands: 97918 00:10:38.109 Controller Busy Time: 0 minutes 00:10:38.109 Power Cycles: 0 00:10:38.109 Power On Hours: 0 hours 00:10:38.109 Unsafe Shutdowns: 0 00:10:38.109 Unrecoverable Media Errors: 0 00:10:38.109 Lifetime Error Log Entries: 0 00:10:38.109 Warning Temperature Time: 0 minutes 00:10:38.109 Critical Temperature Time: 0 minutes 00:10:38.109 00:10:38.109 Number of Queues 00:10:38.109 ================ 00:10:38.109 Number of I/O Submission Queues: 64 00:10:38.109 Number of I/O Completion Queues: 64 00:10:38.109 00:10:38.109 ZNS Specific Controller Data 00:10:38.109 ============================ 00:10:38.109 Zone Append Size Limit: 0 00:10:38.109 00:10:38.109 00:10:38.109 Active Namespaces 00:10:38.109 ================= 00:10:38.109 Namespace ID:1 00:10:38.109 Error Recovery Timeout: Unlimited 00:10:38.109 Command Set Identifier: NVM (00h) 00:10:38.109 Deallocate: Supported 00:10:38.109 Deallocated/Unwritten Error: Supported 00:10:38.109 Deallocated Read Value: All 0x00 00:10:38.109 Deallocate in Write Zeroes: Not Supported 00:10:38.109 Deallocated Guard Field: 0xFFFF 00:10:38.109 Flush: Supported 00:10:38.109 Reservation: Not Supported 00:10:38.109 Namespace Sharing Capabilities: Private 00:10:38.109 Size (in LBAs): 1048576 (4GiB) 00:10:38.109 Capacity (in LBAs): 1048576 (4GiB) 00:10:38.109 Utilization (in LBAs): 1048576 (4GiB) 00:10:38.109 Thin Provisioning: Not Supported 00:10:38.109 Per-NS Atomic Units: No 00:10:38.109 Maximum Single Source Range Length: 128 00:10:38.109 Maximum Copy Length: 128 00:10:38.109 Maximum Source Range Count: 128 00:10:38.109 NGUID/EUI64 Never Reused: No 00:10:38.109 Namespace Write Protected: No 00:10:38.109 Number of LBA Formats: 8 00:10:38.109 Current LBA Format: LBA Format #04 00:10:38.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:38.109 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:38.109 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:38.109 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:38.109 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:38.109 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:38.109 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:38.109 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:38.109 00:10:38.109 Namespace ID:2 00:10:38.109 Error Recovery Timeout: Unlimited 00:10:38.109 Command Set Identifier: NVM (00h) 00:10:38.109 Deallocate: Supported 00:10:38.109 Deallocated/Unwritten Error: Supported 00:10:38.109 Deallocated Read Value: All 
0x00 00:10:38.109 Deallocate in Write Zeroes: Not Supported 00:10:38.109 Deallocated Guard Field: 0xFFFF 00:10:38.109 Flush: Supported 00:10:38.109 Reservation: Not Supported 00:10:38.109 Namespace Sharing Capabilities: Private 00:10:38.109 Size (in LBAs): 1048576 (4GiB) 00:10:38.109 Capacity (in LBAs): 1048576 (4GiB) 00:10:38.109 Utilization (in LBAs): 1048576 (4GiB) 00:10:38.109 Thin Provisioning: Not Supported 00:10:38.109 Per-NS Atomic Units: No 00:10:38.109 Maximum Single Source Range Length: 128 00:10:38.109 Maximum Copy Length: 128 00:10:38.109 Maximum Source Range Count: 128 00:10:38.109 NGUID/EUI64 Never Reused: No 00:10:38.109 Namespace Write Protected: No 00:10:38.109 Number of LBA Formats: 8 00:10:38.109 Current LBA Format: LBA Format #04 00:10:38.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:38.109 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:38.109 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:38.109 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:38.109 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:38.109 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:38.109 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:38.109 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:38.109 00:10:38.109 Namespace ID:3 00:10:38.109 Error Recovery Timeout: Unlimited 00:10:38.109 Command Set Identifier: NVM (00h) 00:10:38.109 Deallocate: Supported 00:10:38.109 Deallocated/Unwritten Error: Supported 00:10:38.109 Deallocated Read Value: All 0x00 00:10:38.109 Deallocate in Write Zeroes: Not Supported 00:10:38.109 Deallocated Guard Field: 0xFFFF 00:10:38.109 Flush: Supported 00:10:38.109 Reservation: Not Supported 00:10:38.109 Namespace Sharing Capabilities: Private 00:10:38.109 Size (in LBAs): 1048576 (4GiB) 00:10:38.109 Capacity (in LBAs): 1048576 (4GiB) 00:10:38.109 Utilization (in LBAs): 1048576 (4GiB) 00:10:38.109 Thin Provisioning: Not Supported 00:10:38.109 Per-NS Atomic Units: No 00:10:38.109 Maximum Single Source Range Length: 128 00:10:38.109 Maximum Copy Length: 128 00:10:38.109 Maximum Source Range Count: 128 00:10:38.109 NGUID/EUI64 Never Reused: No 00:10:38.109 Namespace Write Protected: No 00:10:38.109 Number of LBA Formats: 8 00:10:38.109 Current LBA Format: LBA Format #04 00:10:38.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:38.109 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:38.109 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:38.109 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:38.109 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:38.109 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:38.109 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:38.109 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:38.109 00:10:38.109 09:59:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:38.109 09:59:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:38.368 ===================================================== 00:10:38.368 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:38.368 ===================================================== 00:10:38.368 Controller Capabilities/Features 00:10:38.368 ================================ 00:10:38.368 Vendor ID: 1b36 00:10:38.368 Subsystem Vendor ID: 1af4 00:10:38.368 Serial Number: 12343 00:10:38.368 Model Number: QEMU NVMe Ctrl 00:10:38.368 Firmware Version: 8.0.0 00:10:38.368 Recommended Arb Burst: 6 
00:10:38.368 IEEE OUI Identifier: 00 54 52 00:10:38.368 Multi-path I/O 00:10:38.368 May have multiple subsystem ports: No 00:10:38.368 May have multiple controllers: Yes 00:10:38.368 Associated with SR-IOV VF: No 00:10:38.368 Max Data Transfer Size: 524288 00:10:38.368 Max Number of Namespaces: 256 00:10:38.368 Max Number of I/O Queues: 64 00:10:38.368 NVMe Specification Version (VS): 1.4 00:10:38.368 NVMe Specification Version (Identify): 1.4 00:10:38.368 Maximum Queue Entries: 2048 00:10:38.368 Contiguous Queues Required: Yes 00:10:38.368 Arbitration Mechanisms Supported 00:10:38.368 Weighted Round Robin: Not Supported 00:10:38.368 Vendor Specific: Not Supported 00:10:38.368 Reset Timeout: 7500 ms 00:10:38.368 Doorbell Stride: 4 bytes 00:10:38.368 NVM Subsystem Reset: Not Supported 00:10:38.368 Command Sets Supported 00:10:38.368 NVM Command Set: Supported 00:10:38.368 Boot Partition: Not Supported 00:10:38.368 Memory Page Size Minimum: 4096 bytes 00:10:38.368 Memory Page Size Maximum: 65536 bytes 00:10:38.368 Persistent Memory Region: Not Supported 00:10:38.368 Optional Asynchronous Events Supported 00:10:38.368 Namespace Attribute Notices: Supported 00:10:38.368 Firmware Activation Notices: Not Supported 00:10:38.368 ANA Change Notices: Not Supported 00:10:38.368 PLE Aggregate Log Change Notices: Not Supported 00:10:38.368 LBA Status Info Alert Notices: Not Supported 00:10:38.368 EGE Aggregate Log Change Notices: Not Supported 00:10:38.368 Normal NVM Subsystem Shutdown event: Not Supported 00:10:38.368 Zone Descriptor Change Notices: Not Supported 00:10:38.368 Discovery Log Change Notices: Not Supported 00:10:38.368 Controller Attributes 00:10:38.368 128-bit Host Identifier: Not Supported 00:10:38.368 Non-Operational Permissive Mode: Not Supported 00:10:38.368 NVM Sets: Not Supported 00:10:38.368 Read Recovery Levels: Not Supported 00:10:38.368 Endurance Groups: Supported 00:10:38.368 Predictable Latency Mode: Not Supported 00:10:38.369 Traffic Based Keep ALive: Not Supported 00:10:38.369 Namespace Granularity: Not Supported 00:10:38.369 SQ Associations: Not Supported 00:10:38.369 UUID List: Not Supported 00:10:38.369 Multi-Domain Subsystem: Not Supported 00:10:38.369 Fixed Capacity Management: Not Supported 00:10:38.369 Variable Capacity Management: Not Supported 00:10:38.369 Delete Endurance Group: Not Supported 00:10:38.369 Delete NVM Set: Not Supported 00:10:38.369 Extended LBA Formats Supported: Supported 00:10:38.369 Flexible Data Placement Supported: Supported 00:10:38.369 00:10:38.369 Controller Memory Buffer Support 00:10:38.369 ================================ 00:10:38.369 Supported: No 00:10:38.369 00:10:38.369 Persistent Memory Region Support 00:10:38.369 ================================ 00:10:38.369 Supported: No 00:10:38.369 00:10:38.369 Admin Command Set Attributes 00:10:38.369 ============================ 00:10:38.369 Security Send/Receive: Not Supported 00:10:38.369 Format NVM: Supported 00:10:38.369 Firmware Activate/Download: Not Supported 00:10:38.369 Namespace Management: Supported 00:10:38.369 Device Self-Test: Not Supported 00:10:38.369 Directives: Supported 00:10:38.369 NVMe-MI: Not Supported 00:10:38.369 Virtualization Management: Not Supported 00:10:38.369 Doorbell Buffer Config: Supported 00:10:38.369 Get LBA Status Capability: Not Supported 00:10:38.369 Command & Feature Lockdown Capability: Not Supported 00:10:38.369 Abort Command Limit: 4 00:10:38.369 Async Event Request Limit: 4 00:10:38.369 Number of Firmware Slots: N/A 00:10:38.369 Firmware Slot 1 
Read-Only: N/A 00:10:38.369 Firmware Activation Without Reset: N/A 00:10:38.369 Multiple Update Detection Support: N/A 00:10:38.369 Firmware Update Granularity: No Information Provided 00:10:38.369 Per-Namespace SMART Log: Yes 00:10:38.369 Asymmetric Namespace Access Log Page: Not Supported 00:10:38.369 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:38.369 Command Effects Log Page: Supported 00:10:38.369 Get Log Page Extended Data: Supported 00:10:38.369 Telemetry Log Pages: Not Supported 00:10:38.369 Persistent Event Log Pages: Not Supported 00:10:38.369 Supported Log Pages Log Page: May Support 00:10:38.369 Commands Supported & Effects Log Page: Not Supported 00:10:38.369 Feature Identifiers & Effects Log Page:May Support 00:10:38.369 NVMe-MI Commands & Effects Log Page: May Support 00:10:38.369 Data Area 4 for Telemetry Log: Not Supported 00:10:38.369 Error Log Page Entries Supported: 1 00:10:38.369 Keep Alive: Not Supported 00:10:38.369 00:10:38.369 NVM Command Set Attributes 00:10:38.369 ========================== 00:10:38.369 Submission Queue Entry Size 00:10:38.369 Max: 64 00:10:38.369 Min: 64 00:10:38.369 Completion Queue Entry Size 00:10:38.369 Max: 16 00:10:38.369 Min: 16 00:10:38.369 Number of Namespaces: 256 00:10:38.369 Compare Command: Supported 00:10:38.369 Write Uncorrectable Command: Not Supported 00:10:38.369 Dataset Management Command: Supported 00:10:38.369 Write Zeroes Command: Supported 00:10:38.369 Set Features Save Field: Supported 00:10:38.369 Reservations: Not Supported 00:10:38.369 Timestamp: Supported 00:10:38.369 Copy: Supported 00:10:38.369 Volatile Write Cache: Present 00:10:38.369 Atomic Write Unit (Normal): 1 00:10:38.369 Atomic Write Unit (PFail): 1 00:10:38.369 Atomic Compare & Write Unit: 1 00:10:38.369 Fused Compare & Write: Not Supported 00:10:38.369 Scatter-Gather List 00:10:38.369 SGL Command Set: Supported 00:10:38.369 SGL Keyed: Not Supported 00:10:38.369 SGL Bit Bucket Descriptor: Not Supported 00:10:38.369 SGL Metadata Pointer: Not Supported 00:10:38.369 Oversized SGL: Not Supported 00:10:38.369 SGL Metadata Address: Not Supported 00:10:38.369 SGL Offset: Not Supported 00:10:38.369 Transport SGL Data Block: Not Supported 00:10:38.369 Replay Protected Memory Block: Not Supported 00:10:38.369 00:10:38.369 Firmware Slot Information 00:10:38.369 ========================= 00:10:38.369 Active slot: 1 00:10:38.369 Slot 1 Firmware Revision: 1.0 00:10:38.369 00:10:38.369 00:10:38.369 Commands Supported and Effects 00:10:38.369 ============================== 00:10:38.369 Admin Commands 00:10:38.369 -------------- 00:10:38.369 Delete I/O Submission Queue (00h): Supported 00:10:38.369 Create I/O Submission Queue (01h): Supported 00:10:38.369 Get Log Page (02h): Supported 00:10:38.369 Delete I/O Completion Queue (04h): Supported 00:10:38.369 Create I/O Completion Queue (05h): Supported 00:10:38.369 Identify (06h): Supported 00:10:38.369 Abort (08h): Supported 00:10:38.369 Set Features (09h): Supported 00:10:38.369 Get Features (0Ah): Supported 00:10:38.369 Asynchronous Event Request (0Ch): Supported 00:10:38.369 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:38.369 Directive Send (19h): Supported 00:10:38.369 Directive Receive (1Ah): Supported 00:10:38.369 Virtualization Management (1Ch): Supported 00:10:38.369 Doorbell Buffer Config (7Ch): Supported 00:10:38.369 Format NVM (80h): Supported LBA-Change 00:10:38.369 I/O Commands 00:10:38.369 ------------ 00:10:38.369 Flush (00h): Supported LBA-Change 00:10:38.369 Write (01h): Supported 
LBA-Change 00:10:38.369 Read (02h): Supported 00:10:38.369 Compare (05h): Supported 00:10:38.369 Write Zeroes (08h): Supported LBA-Change 00:10:38.369 Dataset Management (09h): Supported LBA-Change 00:10:38.369 Unknown (0Ch): Supported 00:10:38.369 Unknown (12h): Supported 00:10:38.369 Copy (19h): Supported LBA-Change 00:10:38.369 Unknown (1Dh): Supported LBA-Change 00:10:38.369 00:10:38.369 Error Log 00:10:38.369 ========= 00:10:38.369 00:10:38.369 Arbitration 00:10:38.369 =========== 00:10:38.369 Arbitration Burst: no limit 00:10:38.369 00:10:38.369 Power Management 00:10:38.369 ================ 00:10:38.369 Number of Power States: 1 00:10:38.369 Current Power State: Power State #0 00:10:38.369 Power State #0: 00:10:38.369 Max Power: 25.00 W 00:10:38.369 Non-Operational State: Operational 00:10:38.369 Entry Latency: 16 microseconds 00:10:38.369 Exit Latency: 4 microseconds 00:10:38.369 Relative Read Throughput: 0 00:10:38.369 Relative Read Latency: 0 00:10:38.369 Relative Write Throughput: 0 00:10:38.369 Relative Write Latency: 0 00:10:38.369 Idle Power: Not Reported 00:10:38.369 Active Power: Not Reported 00:10:38.369 Non-Operational Permissive Mode: Not Supported 00:10:38.369 00:10:38.369 Health Information 00:10:38.369 ================== 00:10:38.369 Critical Warnings: 00:10:38.369 Available Spare Space: OK 00:10:38.369 Temperature: OK 00:10:38.369 Device Reliability: OK 00:10:38.369 Read Only: No 00:10:38.369 Volatile Memory Backup: OK 00:10:38.369 Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.369 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:38.369 Available Spare: 0% 00:10:38.369 Available Spare Threshold: 0% 00:10:38.369 Life Percentage Used: 0% 00:10:38.369 Data Units Read: 796 00:10:38.369 Data Units Written: 689 00:10:38.369 Host Read Commands: 34630 00:10:38.369 Host Write Commands: 33220 00:10:38.369 Controller Busy Time: 0 minutes 00:10:38.369 Power Cycles: 0 00:10:38.369 Power On Hours: 0 hours 00:10:38.369 Unsafe Shutdowns: 0 00:10:38.369 Unrecoverable Media Errors: 0 00:10:38.369 Lifetime Error Log Entries: 0 00:10:38.369 Warning Temperature Time: 0 minutes 00:10:38.369 Critical Temperature Time: 0 minutes 00:10:38.369 00:10:38.369 Number of Queues 00:10:38.369 ================ 00:10:38.369 Number of I/O Submission Queues: 64 00:10:38.369 Number of I/O Completion Queues: 64 00:10:38.369 00:10:38.369 ZNS Specific Controller Data 00:10:38.369 ============================ 00:10:38.369 Zone Append Size Limit: 0 00:10:38.369 00:10:38.369 00:10:38.369 Active Namespaces 00:10:38.369 ================= 00:10:38.369 Namespace ID:1 00:10:38.369 Error Recovery Timeout: Unlimited 00:10:38.369 Command Set Identifier: NVM (00h) 00:10:38.369 Deallocate: Supported 00:10:38.369 Deallocated/Unwritten Error: Supported 00:10:38.369 Deallocated Read Value: All 0x00 00:10:38.369 Deallocate in Write Zeroes: Not Supported 00:10:38.369 Deallocated Guard Field: 0xFFFF 00:10:38.369 Flush: Supported 00:10:38.369 Reservation: Not Supported 00:10:38.369 Namespace Sharing Capabilities: Multiple Controllers 00:10:38.369 Size (in LBAs): 262144 (1GiB) 00:10:38.369 Capacity (in LBAs): 262144 (1GiB) 00:10:38.369 Utilization (in LBAs): 262144 (1GiB) 00:10:38.369 Thin Provisioning: Not Supported 00:10:38.369 Per-NS Atomic Units: No 00:10:38.369 Maximum Single Source Range Length: 128 00:10:38.369 Maximum Copy Length: 128 00:10:38.369 Maximum Source Range Count: 128 00:10:38.369 NGUID/EUI64 Never Reused: No 00:10:38.369 Namespace Write Protected: No 00:10:38.369 Endurance group ID: 1 00:10:38.369 
Number of LBA Formats: 8 00:10:38.369 Current LBA Format: LBA Format #04 00:10:38.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:38.370 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:38.370 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:38.370 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:38.370 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:38.370 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:38.370 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:38.370 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:38.370 00:10:38.370 Get Feature FDP: 00:10:38.370 ================ 00:10:38.370 Enabled: Yes 00:10:38.370 FDP configuration index: 0 00:10:38.370 00:10:38.370 FDP configurations log page 00:10:38.370 =========================== 00:10:38.370 Number of FDP configurations: 1 00:10:38.370 Version: 0 00:10:38.370 Size: 112 00:10:38.370 FDP Configuration Descriptor: 0 00:10:38.370 Descriptor Size: 96 00:10:38.370 Reclaim Group Identifier format: 2 00:10:38.370 FDP Volatile Write Cache: Not Present 00:10:38.370 FDP Configuration: Valid 00:10:38.370 Vendor Specific Size: 0 00:10:38.370 Number of Reclaim Groups: 2 00:10:38.370 Number of Recalim Unit Handles: 8 00:10:38.370 Max Placement Identifiers: 128 00:10:38.370 Number of Namespaces Suppprted: 256 00:10:38.370 Reclaim unit Nominal Size: 6000000 bytes 00:10:38.370 Estimated Reclaim Unit Time Limit: Not Reported 00:10:38.370 RUH Desc #000: RUH Type: Initially Isolated 00:10:38.370 RUH Desc #001: RUH Type: Initially Isolated 00:10:38.370 RUH Desc #002: RUH Type: Initially Isolated 00:10:38.370 RUH Desc #003: RUH Type: Initially Isolated 00:10:38.370 RUH Desc #004: RUH Type: Initially Isolated 00:10:38.370 RUH Desc #005: RUH Type: Initially Isolated 00:10:38.370 RUH Desc #006: RUH Type: Initially Isolated 00:10:38.370 RUH Desc #007: RUH Type: Initially Isolated 00:10:38.370 00:10:38.370 FDP reclaim unit handle usage log page 00:10:38.370 ====================================== 00:10:38.370 Number of Reclaim Unit Handles: 8 00:10:38.370 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:38.370 RUH Usage Desc #001: RUH Attributes: Unused 00:10:38.370 RUH Usage Desc #002: RUH Attributes: Unused 00:10:38.370 RUH Usage Desc #003: RUH Attributes: Unused 00:10:38.370 RUH Usage Desc #004: RUH Attributes: Unused 00:10:38.370 RUH Usage Desc #005: RUH Attributes: Unused 00:10:38.370 RUH Usage Desc #006: RUH Attributes: Unused 00:10:38.370 RUH Usage Desc #007: RUH Attributes: Unused 00:10:38.370 00:10:38.370 FDP statistics log page 00:10:38.370 ======================= 00:10:38.370 Host bytes with metadata written: 428711936 00:10:38.370 Media bytes with metadata written: 428777472 00:10:38.370 Media bytes erased: 0 00:10:38.370 00:10:38.370 FDP events log page 00:10:38.370 =================== 00:10:38.370 Number of FDP events: 0 00:10:38.370 00:10:38.628 ************************************ 00:10:38.628 END TEST nvme_identify 00:10:38.628 ************************************ 00:10:38.628 00:10:38.628 real 0m1.531s 00:10:38.628 user 0m0.624s 00:10:38.628 sys 0m0.713s 00:10:38.628 09:59:27 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:38.628 09:59:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:38.628 09:59:27 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:38.628 09:59:27 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:38.628 09:59:27 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:10:38.628 09:59:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:38.628 ************************************ 00:10:38.628 START TEST nvme_perf 00:10:38.628 ************************************ 00:10:38.628 09:59:27 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # nvme_perf 00:10:38.628 09:59:27 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:40.013 Initializing NVMe Controllers 00:10:40.013 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:40.013 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:40.013 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:40.013 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:40.013 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:40.013 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:40.013 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:40.013 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:40.013 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:40.013 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:40.013 Initialization complete. Launching workers. 00:10:40.013 ======================================================== 00:10:40.013 Latency(us) 00:10:40.013 Device Information : IOPS MiB/s Average min max 00:10:40.013 PCIE (0000:00:10.0) NSID 1 from core 0: 13681.78 160.33 9359.61 7711.58 50263.79 00:10:40.013 PCIE (0000:00:11.0) NSID 1 from core 0: 13681.78 160.33 9322.90 7747.29 46191.77 00:10:40.013 PCIE (0000:00:13.0) NSID 1 from core 0: 13681.78 160.33 9284.10 7738.76 42446.34 00:10:40.013 PCIE (0000:00:12.0) NSID 1 from core 0: 13681.78 160.33 9244.37 7713.74 38149.78 00:10:40.013 PCIE (0000:00:12.0) NSID 2 from core 0: 13681.78 160.33 9204.58 7746.49 33847.33 00:10:40.013 PCIE (0000:00:12.0) NSID 3 from core 0: 13681.78 160.33 9163.60 7743.60 29411.27 00:10:40.013 ======================================================== 00:10:40.013 Total : 82090.71 962.00 9263.19 7711.58 50263.79 00:10:40.013 00:10:40.013 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:40.013 ================================================================================= 00:10:40.013 1.00000% : 7923.898us 00:10:40.013 10.00000% : 8162.211us 00:10:40.013 25.00000% : 8460.102us 00:10:40.013 50.00000% : 8877.149us 00:10:40.013 75.00000% : 9413.353us 00:10:40.013 90.00000% : 10307.025us 00:10:40.013 95.00000% : 10902.807us 00:10:40.013 98.00000% : 12511.418us 00:10:40.013 99.00000% : 13881.716us 00:10:40.013 99.50000% : 41943.040us 00:10:40.013 99.90000% : 49569.047us 00:10:40.013 99.99000% : 50283.985us 00:10:40.013 99.99900% : 50283.985us 00:10:40.013 99.99990% : 50283.985us 00:10:40.013 99.99999% : 50283.985us 00:10:40.013 00:10:40.013 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:40.013 ================================================================================= 00:10:40.013 1.00000% : 7983.476us 00:10:40.013 10.00000% : 8221.789us 00:10:40.013 25.00000% : 8460.102us 00:10:40.013 50.00000% : 8817.571us 00:10:40.013 75.00000% : 9413.353us 00:10:40.013 90.00000% : 10247.447us 00:10:40.013 95.00000% : 10843.229us 00:10:40.013 98.00000% : 12332.684us 00:10:40.013 99.00000% : 14179.607us 00:10:40.013 99.50000% : 38368.349us 00:10:40.013 99.90000% : 45756.044us 00:10:40.013 99.99000% : 46232.669us 00:10:40.014 99.99900% : 46232.669us 00:10:40.014 99.99990% : 46232.669us 00:10:40.014 99.99999% : 46232.669us 00:10:40.014 
00:10:40.014 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:40.014 ================================================================================= 00:10:40.014 1.00000% : 7983.476us 00:10:40.014 10.00000% : 8221.789us 00:10:40.014 25.00000% : 8460.102us 00:10:40.014 50.00000% : 8817.571us 00:10:40.014 75.00000% : 9413.353us 00:10:40.014 90.00000% : 10247.447us 00:10:40.014 95.00000% : 10902.807us 00:10:40.014 98.00000% : 12332.684us 00:10:40.014 99.00000% : 13762.560us 00:10:40.014 99.50000% : 34317.033us 00:10:40.014 99.90000% : 41943.040us 00:10:40.014 99.99000% : 42419.665us 00:10:40.014 99.99900% : 42657.978us 00:10:40.014 99.99990% : 42657.978us 00:10:40.014 99.99999% : 42657.978us 00:10:40.014 00:10:40.014 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:40.014 ================================================================================= 00:10:40.014 1.00000% : 7983.476us 00:10:40.014 10.00000% : 8221.789us 00:10:40.014 25.00000% : 8460.102us 00:10:40.014 50.00000% : 8817.571us 00:10:40.014 75.00000% : 9413.353us 00:10:40.014 90.00000% : 10247.447us 00:10:40.014 95.00000% : 10843.229us 00:10:40.014 98.00000% : 12451.840us 00:10:40.014 99.00000% : 13524.247us 00:10:40.014 99.50000% : 30027.404us 00:10:40.014 99.90000% : 37653.411us 00:10:40.014 99.99000% : 38130.036us 00:10:40.014 99.99900% : 38368.349us 00:10:40.014 99.99990% : 38368.349us 00:10:40.014 99.99999% : 38368.349us 00:10:40.014 00:10:40.014 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:40.014 ================================================================================= 00:10:40.014 1.00000% : 7983.476us 00:10:40.014 10.00000% : 8221.789us 00:10:40.014 25.00000% : 8460.102us 00:10:40.014 50.00000% : 8817.571us 00:10:40.014 75.00000% : 9413.353us 00:10:40.014 90.00000% : 10247.447us 00:10:40.014 95.00000% : 10783.651us 00:10:40.014 98.00000% : 12511.418us 00:10:40.014 99.00000% : 13524.247us 00:10:40.014 99.50000% : 25737.775us 00:10:40.014 99.90000% : 33363.782us 00:10:40.014 99.99000% : 33840.407us 00:10:40.014 99.99900% : 34078.720us 00:10:40.014 99.99990% : 34078.720us 00:10:40.014 99.99999% : 34078.720us 00:10:40.014 00:10:40.014 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:40.014 ================================================================================= 00:10:40.014 1.00000% : 7983.476us 00:10:40.014 10.00000% : 8221.789us 00:10:40.014 25.00000% : 8460.102us 00:10:40.014 50.00000% : 8817.571us 00:10:40.014 75.00000% : 9413.353us 00:10:40.014 90.00000% : 10247.447us 00:10:40.014 95.00000% : 10783.651us 00:10:40.014 98.00000% : 12511.418us 00:10:40.014 99.00000% : 13464.669us 00:10:40.014 99.50000% : 21209.833us 00:10:40.014 99.90000% : 28835.840us 00:10:40.014 99.99000% : 29431.622us 00:10:40.014 99.99900% : 29431.622us 00:10:40.014 99.99990% : 29431.622us 00:10:40.014 99.99999% : 29431.622us 00:10:40.014 00:10:40.014 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:40.014 ============================================================================== 00:10:40.014 Range in us Cumulative IO count 00:10:40.014 7685.585 - 7745.164: 0.0730% ( 10) 00:10:40.014 7745.164 - 7804.742: 0.2775% ( 28) 00:10:40.014 7804.742 - 7864.320: 0.9054% ( 86) 00:10:40.014 7864.320 - 7923.898: 1.9057% ( 137) 00:10:40.014 7923.898 - 7983.476: 3.4901% ( 217) 00:10:40.014 7983.476 - 8043.055: 5.6659% ( 298) 00:10:40.014 8043.055 - 8102.633: 8.2360% ( 352) 00:10:40.014 8102.633 - 8162.211: 11.2442% ( 412) 00:10:40.014 
8162.211 - 8221.789: 14.4203% ( 435) 00:10:40.014 8221.789 - 8281.367: 17.7497% ( 456) 00:10:40.014 8281.367 - 8340.945: 21.1303% ( 463) 00:10:40.014 8340.945 - 8400.524: 24.4451% ( 454) 00:10:40.014 8400.524 - 8460.102: 27.7745% ( 456) 00:10:40.014 8460.102 - 8519.680: 31.2354% ( 474) 00:10:40.014 8519.680 - 8579.258: 34.6598% ( 469) 00:10:40.014 8579.258 - 8638.836: 38.1425% ( 477) 00:10:40.014 8638.836 - 8698.415: 41.8954% ( 514) 00:10:40.014 8698.415 - 8757.993: 45.7214% ( 524) 00:10:40.014 8757.993 - 8817.571: 49.5108% ( 519) 00:10:40.014 8817.571 - 8877.149: 53.2418% ( 511) 00:10:40.014 8877.149 - 8936.727: 56.8195% ( 490) 00:10:40.014 8936.727 - 8996.305: 60.1197% ( 452) 00:10:40.014 8996.305 - 9055.884: 63.2593% ( 430) 00:10:40.014 9055.884 - 9115.462: 65.9244% ( 365) 00:10:40.014 9115.462 - 9175.040: 68.2097% ( 313) 00:10:40.014 9175.040 - 9234.618: 70.3198% ( 289) 00:10:40.014 9234.618 - 9294.196: 72.2766% ( 268) 00:10:40.014 9294.196 - 9353.775: 74.0946% ( 249) 00:10:40.014 9353.775 - 9413.353: 75.6206% ( 209) 00:10:40.014 9413.353 - 9472.931: 76.8546% ( 169) 00:10:40.014 9472.931 - 9532.509: 78.1615% ( 179) 00:10:40.014 9532.509 - 9592.087: 79.3589% ( 164) 00:10:40.014 9592.087 - 9651.665: 80.3592% ( 137) 00:10:40.014 9651.665 - 9711.244: 81.4690% ( 152) 00:10:40.014 9711.244 - 9770.822: 82.5423% ( 147) 00:10:40.014 9770.822 - 9830.400: 83.5938% ( 144) 00:10:40.014 9830.400 - 9889.978: 84.6159% ( 140) 00:10:40.014 9889.978 - 9949.556: 85.6016% ( 135) 00:10:40.014 9949.556 - 10009.135: 86.5362% ( 128) 00:10:40.014 10009.135 - 10068.713: 87.3759% ( 115) 00:10:40.014 10068.713 - 10128.291: 88.2155% ( 115) 00:10:40.014 10128.291 - 10187.869: 88.9895% ( 106) 00:10:40.014 10187.869 - 10247.447: 89.7415% ( 103) 00:10:40.014 10247.447 - 10307.025: 90.4279% ( 94) 00:10:40.014 10307.025 - 10366.604: 91.0777% ( 89) 00:10:40.014 10366.604 - 10426.182: 91.6837% ( 83) 00:10:40.014 10426.182 - 10485.760: 92.1802% ( 68) 00:10:40.014 10485.760 - 10545.338: 92.6694% ( 67) 00:10:40.014 10545.338 - 10604.916: 93.1586% ( 67) 00:10:40.014 10604.916 - 10664.495: 93.6551% ( 68) 00:10:40.014 10664.495 - 10724.073: 94.1151% ( 63) 00:10:40.014 10724.073 - 10783.651: 94.5093% ( 54) 00:10:40.014 10783.651 - 10843.229: 94.8890% ( 52) 00:10:40.014 10843.229 - 10902.807: 95.0862% ( 27) 00:10:40.014 10902.807 - 10962.385: 95.3198% ( 32) 00:10:40.014 10962.385 - 11021.964: 95.4658% ( 20) 00:10:40.014 11021.964 - 11081.542: 95.6046% ( 19) 00:10:40.014 11081.542 - 11141.120: 95.7433% ( 19) 00:10:40.014 11141.120 - 11200.698: 95.8601% ( 16) 00:10:40.014 11200.698 - 11260.276: 95.9623% ( 14) 00:10:40.014 11260.276 - 11319.855: 96.1230% ( 22) 00:10:40.014 11319.855 - 11379.433: 96.2398% ( 16) 00:10:40.014 11379.433 - 11439.011: 96.3566% ( 16) 00:10:40.014 11439.011 - 11498.589: 96.4515% ( 13) 00:10:40.014 11498.589 - 11558.167: 96.5610% ( 15) 00:10:40.014 11558.167 - 11617.745: 96.6341% ( 10) 00:10:40.014 11617.745 - 11677.324: 96.7728% ( 19) 00:10:40.014 11677.324 - 11736.902: 96.8750% ( 14) 00:10:40.014 11736.902 - 11796.480: 96.9845% ( 15) 00:10:40.014 11796.480 - 11856.058: 97.0794% ( 13) 00:10:40.014 11856.058 - 11915.636: 97.2109% ( 18) 00:10:40.014 11915.636 - 11975.215: 97.3277% ( 16) 00:10:40.014 11975.215 - 12034.793: 97.4372% ( 15) 00:10:40.014 12034.793 - 12094.371: 97.5248% ( 12) 00:10:40.014 12094.371 - 12153.949: 97.6343% ( 15) 00:10:40.014 12153.949 - 12213.527: 97.7147% ( 11) 00:10:40.014 12213.527 - 12273.105: 97.7731% ( 8) 00:10:40.014 12273.105 - 12332.684: 97.8607% ( 12) 00:10:40.014 12332.684 - 
12392.262: 97.9410% ( 11) 00:10:40.014 12392.262 - 12451.840: 97.9994% ( 8) 00:10:40.014 12451.840 - 12511.418: 98.0943% ( 13) 00:10:40.014 12511.418 - 12570.996: 98.1673% ( 10) 00:10:40.014 12570.996 - 12630.575: 98.2696% ( 14) 00:10:40.014 12630.575 - 12690.153: 98.3353% ( 9) 00:10:40.014 12690.153 - 12749.731: 98.4010% ( 9) 00:10:40.014 12749.731 - 12809.309: 98.4667% ( 9) 00:10:40.014 12809.309 - 12868.887: 98.5178% ( 7) 00:10:40.014 12868.887 - 12928.465: 98.5908% ( 10) 00:10:40.014 12928.465 - 12988.044: 98.6492% ( 8) 00:10:40.014 12988.044 - 13047.622: 98.7150% ( 9) 00:10:40.014 13047.622 - 13107.200: 98.7661% ( 7) 00:10:40.014 13107.200 - 13166.778: 98.8245% ( 8) 00:10:40.014 13166.778 - 13226.356: 98.8537% ( 4) 00:10:40.014 13226.356 - 13285.935: 98.8683% ( 2) 00:10:40.014 13285.935 - 13345.513: 98.8902% ( 3) 00:10:40.014 13345.513 - 13405.091: 98.8975% ( 1) 00:10:40.014 13405.091 - 13464.669: 98.9121% ( 2) 00:10:40.014 13464.669 - 13524.247: 98.9267% ( 2) 00:10:40.014 13524.247 - 13583.825: 98.9340% ( 1) 00:10:40.014 13583.825 - 13643.404: 98.9486% ( 2) 00:10:40.014 13643.404 - 13702.982: 98.9705% ( 3) 00:10:40.014 13702.982 - 13762.560: 98.9778% ( 1) 00:10:40.014 13762.560 - 13822.138: 98.9924% ( 2) 00:10:40.014 13822.138 - 13881.716: 99.0070% ( 2) 00:10:40.014 13881.716 - 13941.295: 99.0143% ( 1) 00:10:40.014 13941.295 - 14000.873: 99.0289% ( 2) 00:10:40.014 14000.873 - 14060.451: 99.0508% ( 3) 00:10:40.014 14060.451 - 14120.029: 99.0654% ( 2) 00:10:40.014 38606.662 - 38844.975: 99.0800% ( 2) 00:10:40.014 38844.975 - 39083.287: 99.1165% ( 5) 00:10:40.014 39083.287 - 39321.600: 99.1530% ( 5) 00:10:40.014 39321.600 - 39559.913: 99.1822% ( 4) 00:10:40.014 39559.913 - 39798.225: 99.2188% ( 5) 00:10:40.014 39798.225 - 40036.538: 99.2407% ( 3) 00:10:40.014 40036.538 - 40274.851: 99.2699% ( 4) 00:10:40.014 40274.851 - 40513.164: 99.3064% ( 5) 00:10:40.014 40513.164 - 40751.476: 99.3429% ( 5) 00:10:40.014 40751.476 - 40989.789: 99.3721% ( 4) 00:10:40.014 40989.789 - 41228.102: 99.4086% ( 5) 00:10:40.014 41228.102 - 41466.415: 99.4378% ( 4) 00:10:40.014 41466.415 - 41704.727: 99.4743% ( 5) 00:10:40.014 41704.727 - 41943.040: 99.5035% ( 4) 00:10:40.014 41943.040 - 42181.353: 99.5327% ( 4) 00:10:40.014 46709.295 - 46947.607: 99.5400% ( 1) 00:10:40.014 46947.607 - 47185.920: 99.5765% ( 5) 00:10:40.014 47185.920 - 47424.233: 99.6057% ( 4) 00:10:40.015 47424.233 - 47662.545: 99.6422% ( 5) 00:10:40.015 47662.545 - 47900.858: 99.6714% ( 4) 00:10:40.015 47900.858 - 48139.171: 99.7079% ( 5) 00:10:40.015 48139.171 - 48377.484: 99.7371% ( 4) 00:10:40.015 48377.484 - 48615.796: 99.7737% ( 5) 00:10:40.015 48615.796 - 48854.109: 99.8029% ( 4) 00:10:40.015 48854.109 - 49092.422: 99.8394% ( 5) 00:10:40.015 49092.422 - 49330.735: 99.8686% ( 4) 00:10:40.015 49330.735 - 49569.047: 99.9051% ( 5) 00:10:40.015 49569.047 - 49807.360: 99.9343% ( 4) 00:10:40.015 49807.360 - 50045.673: 99.9708% ( 5) 00:10:40.015 50045.673 - 50283.985: 100.0000% ( 4) 00:10:40.015 00:10:40.015 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:40.015 ============================================================================== 00:10:40.015 Range in us Cumulative IO count 00:10:40.015 7745.164 - 7804.742: 0.0365% ( 5) 00:10:40.015 7804.742 - 7864.320: 0.2190% ( 25) 00:10:40.015 7864.320 - 7923.898: 0.6060% ( 53) 00:10:40.015 7923.898 - 7983.476: 1.4165% ( 111) 00:10:40.015 7983.476 - 8043.055: 2.8841% ( 201) 00:10:40.015 8043.055 - 8102.633: 4.8700% ( 272) 00:10:40.015 8102.633 - 8162.211: 7.6300% ( 378) 
00:10:40.015 8162.211 - 8221.789: 10.8353% ( 439) 00:10:40.015 8221.789 - 8281.367: 14.3984% ( 488) 00:10:40.015 8281.367 - 8340.945: 18.2973% ( 534) 00:10:40.015 8340.945 - 8400.524: 22.4226% ( 565) 00:10:40.015 8400.524 - 8460.102: 26.4019% ( 545) 00:10:40.015 8460.102 - 8519.680: 30.4907% ( 560) 00:10:40.015 8519.680 - 8579.258: 34.5064% ( 550) 00:10:40.015 8579.258 - 8638.836: 38.6682% ( 570) 00:10:40.015 8638.836 - 8698.415: 42.8884% ( 578) 00:10:40.015 8698.415 - 8757.993: 46.9991% ( 563) 00:10:40.015 8757.993 - 8817.571: 51.2120% ( 577) 00:10:40.015 8817.571 - 8877.149: 55.2935% ( 559) 00:10:40.015 8877.149 - 8936.727: 58.9953% ( 507) 00:10:40.015 8936.727 - 8996.305: 62.3102% ( 454) 00:10:40.015 8996.305 - 9055.884: 65.0701% ( 378) 00:10:40.015 9055.884 - 9115.462: 67.6256% ( 350) 00:10:40.015 9115.462 - 9175.040: 69.6773% ( 281) 00:10:40.015 9175.040 - 9234.618: 71.5318% ( 254) 00:10:40.015 9234.618 - 9294.196: 72.9337% ( 192) 00:10:40.015 9294.196 - 9353.775: 74.1968% ( 173) 00:10:40.015 9353.775 - 9413.353: 75.3578% ( 159) 00:10:40.015 9413.353 - 9472.931: 76.5406% ( 162) 00:10:40.015 9472.931 - 9532.509: 77.7307% ( 163) 00:10:40.015 9532.509 - 9592.087: 78.9355% ( 165) 00:10:40.015 9592.087 - 9651.665: 80.2278% ( 177) 00:10:40.015 9651.665 - 9711.244: 81.5055% ( 175) 00:10:40.015 9711.244 - 9770.822: 82.7103% ( 165) 00:10:40.015 9770.822 - 9830.400: 83.8566% ( 157) 00:10:40.015 9830.400 - 9889.978: 84.9591% ( 151) 00:10:40.015 9889.978 - 9949.556: 85.9886% ( 141) 00:10:40.015 9949.556 - 10009.135: 86.9305% ( 129) 00:10:40.015 10009.135 - 10068.713: 87.8724% ( 129) 00:10:40.015 10068.713 - 10128.291: 88.7193% ( 116) 00:10:40.015 10128.291 - 10187.869: 89.5517% ( 114) 00:10:40.015 10187.869 - 10247.447: 90.3475% ( 109) 00:10:40.015 10247.447 - 10307.025: 91.0631% ( 98) 00:10:40.015 10307.025 - 10366.604: 91.6910% ( 86) 00:10:40.015 10366.604 - 10426.182: 92.2605% ( 78) 00:10:40.015 10426.182 - 10485.760: 92.7716% ( 70) 00:10:40.015 10485.760 - 10545.338: 93.3192% ( 75) 00:10:40.015 10545.338 - 10604.916: 93.8595% ( 74) 00:10:40.015 10604.916 - 10664.495: 94.3049% ( 61) 00:10:40.015 10664.495 - 10724.073: 94.6773% ( 51) 00:10:40.015 10724.073 - 10783.651: 94.9255% ( 34) 00:10:40.015 10783.651 - 10843.229: 95.1592% ( 32) 00:10:40.015 10843.229 - 10902.807: 95.3782% ( 30) 00:10:40.015 10902.807 - 10962.385: 95.5388% ( 22) 00:10:40.015 10962.385 - 11021.964: 95.6995% ( 22) 00:10:40.015 11021.964 - 11081.542: 95.8528% ( 21) 00:10:40.015 11081.542 - 11141.120: 95.9915% ( 19) 00:10:40.015 11141.120 - 11200.698: 96.1084% ( 16) 00:10:40.015 11200.698 - 11260.276: 96.2033% ( 13) 00:10:40.015 11260.276 - 11319.855: 96.3274% ( 17) 00:10:40.015 11319.855 - 11379.433: 96.4369% ( 15) 00:10:40.015 11379.433 - 11439.011: 96.5683% ( 18) 00:10:40.015 11439.011 - 11498.589: 96.6341% ( 9) 00:10:40.015 11498.589 - 11558.167: 96.7144% ( 11) 00:10:40.015 11558.167 - 11617.745: 96.7801% ( 9) 00:10:40.015 11617.745 - 11677.324: 96.8385% ( 8) 00:10:40.015 11677.324 - 11736.902: 96.9553% ( 16) 00:10:40.015 11736.902 - 11796.480: 97.0867% ( 18) 00:10:40.015 11796.480 - 11856.058: 97.1890% ( 14) 00:10:40.015 11856.058 - 11915.636: 97.3131% ( 17) 00:10:40.015 11915.636 - 11975.215: 97.4153% ( 14) 00:10:40.015 11975.215 - 12034.793: 97.5175% ( 14) 00:10:40.015 12034.793 - 12094.371: 97.6051% ( 12) 00:10:40.015 12094.371 - 12153.949: 97.7220% ( 16) 00:10:40.015 12153.949 - 12213.527: 97.8096% ( 12) 00:10:40.015 12213.527 - 12273.105: 97.9045% ( 13) 00:10:40.015 12273.105 - 12332.684: 98.0067% ( 14) 00:10:40.015 
12332.684 - 12392.262: 98.1089% ( 14) 00:10:40.015 12392.262 - 12451.840: 98.1966% ( 12) 00:10:40.015 12451.840 - 12511.418: 98.2696% ( 10) 00:10:40.015 12511.418 - 12570.996: 98.3134% ( 6) 00:10:40.015 12570.996 - 12630.575: 98.3572% ( 6) 00:10:40.015 12630.575 - 12690.153: 98.4156% ( 8) 00:10:40.015 12690.153 - 12749.731: 98.4667% ( 7) 00:10:40.015 12749.731 - 12809.309: 98.5178% ( 7) 00:10:40.015 12809.309 - 12868.887: 98.5835% ( 9) 00:10:40.015 12868.887 - 12928.465: 98.6419% ( 8) 00:10:40.015 12928.465 - 12988.044: 98.6857% ( 6) 00:10:40.015 12988.044 - 13047.622: 98.7150% ( 4) 00:10:40.015 13047.622 - 13107.200: 98.7296% ( 2) 00:10:40.015 13107.200 - 13166.778: 98.7515% ( 3) 00:10:40.015 13166.778 - 13226.356: 98.7661% ( 2) 00:10:40.015 13226.356 - 13285.935: 98.7807% ( 2) 00:10:40.015 13285.935 - 13345.513: 98.7953% ( 2) 00:10:40.015 13345.513 - 13405.091: 98.8099% ( 2) 00:10:40.015 13405.091 - 13464.669: 98.8245% ( 2) 00:10:40.015 13464.669 - 13524.247: 98.8318% ( 1) 00:10:40.015 13524.247 - 13583.825: 98.8464% ( 2) 00:10:40.015 13583.825 - 13643.404: 98.8610% ( 2) 00:10:40.015 13643.404 - 13702.982: 98.8756% ( 2) 00:10:40.015 13702.982 - 13762.560: 98.8975% ( 3) 00:10:40.015 13762.560 - 13822.138: 98.9121% ( 2) 00:10:40.015 13822.138 - 13881.716: 98.9267% ( 2) 00:10:40.015 13881.716 - 13941.295: 98.9486% ( 3) 00:10:40.015 13941.295 - 14000.873: 98.9632% ( 2) 00:10:40.015 14000.873 - 14060.451: 98.9778% ( 2) 00:10:40.015 14060.451 - 14120.029: 98.9997% ( 3) 00:10:40.015 14120.029 - 14179.607: 99.0143% ( 2) 00:10:40.015 14179.607 - 14239.185: 99.0289% ( 2) 00:10:40.015 14239.185 - 14298.764: 99.0508% ( 3) 00:10:40.015 14298.764 - 14358.342: 99.0654% ( 2) 00:10:40.015 35031.971 - 35270.284: 99.0873% ( 3) 00:10:40.015 35270.284 - 35508.596: 99.1165% ( 4) 00:10:40.015 35508.596 - 35746.909: 99.1530% ( 5) 00:10:40.015 35746.909 - 35985.222: 99.1822% ( 4) 00:10:40.015 35985.222 - 36223.535: 99.2188% ( 5) 00:10:40.015 36223.535 - 36461.847: 99.2553% ( 5) 00:10:40.015 36461.847 - 36700.160: 99.2918% ( 5) 00:10:40.015 36700.160 - 36938.473: 99.3283% ( 5) 00:10:40.015 36938.473 - 37176.785: 99.3575% ( 4) 00:10:40.015 37176.785 - 37415.098: 99.3940% ( 5) 00:10:40.015 37415.098 - 37653.411: 99.4305% ( 5) 00:10:40.015 37653.411 - 37891.724: 99.4524% ( 3) 00:10:40.015 37891.724 - 38130.036: 99.4889% ( 5) 00:10:40.015 38130.036 - 38368.349: 99.5254% ( 5) 00:10:40.015 38368.349 - 38606.662: 99.5327% ( 1) 00:10:40.015 42896.291 - 43134.604: 99.5546% ( 3) 00:10:40.015 43134.604 - 43372.916: 99.5911% ( 5) 00:10:40.015 43372.916 - 43611.229: 99.6276% ( 5) 00:10:40.015 43611.229 - 43849.542: 99.6495% ( 3) 00:10:40.015 43849.542 - 44087.855: 99.6860% ( 5) 00:10:40.015 44087.855 - 44326.167: 99.7225% ( 5) 00:10:40.015 44326.167 - 44564.480: 99.7591% ( 5) 00:10:40.015 44564.480 - 44802.793: 99.7883% ( 4) 00:10:40.015 44802.793 - 45041.105: 99.8248% ( 5) 00:10:40.015 45041.105 - 45279.418: 99.8613% ( 5) 00:10:40.015 45279.418 - 45517.731: 99.8978% ( 5) 00:10:40.015 45517.731 - 45756.044: 99.9343% ( 5) 00:10:40.015 45756.044 - 45994.356: 99.9708% ( 5) 00:10:40.015 45994.356 - 46232.669: 100.0000% ( 4) 00:10:40.015 00:10:40.015 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:40.015 ============================================================================== 00:10:40.015 Range in us Cumulative IO count 00:10:40.015 7685.585 - 7745.164: 0.0073% ( 1) 00:10:40.015 7745.164 - 7804.742: 0.0876% ( 11) 00:10:40.015 7804.742 - 7864.320: 0.2994% ( 29) 00:10:40.015 7864.320 - 7923.898: 0.6498% ( 
48) 00:10:40.015 7923.898 - 7983.476: 1.4603% ( 111) 00:10:40.015 7983.476 - 8043.055: 2.8183% ( 186) 00:10:40.015 8043.055 - 8102.633: 4.8919% ( 284) 00:10:40.015 8102.633 - 8162.211: 7.6154% ( 373) 00:10:40.015 8162.211 - 8221.789: 10.9594% ( 458) 00:10:40.015 8221.789 - 8281.367: 14.6466% ( 505) 00:10:40.015 8281.367 - 8340.945: 18.5164% ( 530) 00:10:40.015 8340.945 - 8400.524: 22.5102% ( 547) 00:10:40.015 8400.524 - 8460.102: 26.4457% ( 539) 00:10:40.015 8460.102 - 8519.680: 30.5272% ( 559) 00:10:40.015 8519.680 - 8579.258: 34.7036% ( 572) 00:10:40.015 8579.258 - 8638.836: 38.7631% ( 556) 00:10:40.015 8638.836 - 8698.415: 43.0126% ( 582) 00:10:40.015 8698.415 - 8757.993: 47.1525% ( 567) 00:10:40.015 8757.993 - 8817.571: 51.3143% ( 570) 00:10:40.015 8817.571 - 8877.149: 55.3008% ( 546) 00:10:40.015 8877.149 - 8936.727: 59.0610% ( 515) 00:10:40.015 8936.727 - 8996.305: 62.3394% ( 449) 00:10:40.015 8996.305 - 9055.884: 65.2599% ( 400) 00:10:40.015 9055.884 - 9115.462: 67.6621% ( 329) 00:10:40.015 9115.462 - 9175.040: 69.8014% ( 293) 00:10:40.015 9175.040 - 9234.618: 71.5902% ( 245) 00:10:40.016 9234.618 - 9294.196: 73.0505% ( 200) 00:10:40.016 9294.196 - 9353.775: 74.3575% ( 179) 00:10:40.016 9353.775 - 9413.353: 75.6352% ( 175) 00:10:40.016 9413.353 - 9472.931: 76.7523% ( 153) 00:10:40.016 9472.931 - 9532.509: 77.9717% ( 167) 00:10:40.016 9532.509 - 9592.087: 79.1837% ( 166) 00:10:40.016 9592.087 - 9651.665: 80.4030% ( 167) 00:10:40.016 9651.665 - 9711.244: 81.6589% ( 172) 00:10:40.016 9711.244 - 9770.822: 82.8928% ( 169) 00:10:40.016 9770.822 - 9830.400: 84.0464% ( 158) 00:10:40.016 9830.400 - 9889.978: 85.1270% ( 148) 00:10:40.016 9889.978 - 9949.556: 86.1127% ( 135) 00:10:40.016 9949.556 - 10009.135: 87.0327% ( 126) 00:10:40.016 10009.135 - 10068.713: 87.9308% ( 123) 00:10:40.016 10068.713 - 10128.291: 88.7558% ( 113) 00:10:40.016 10128.291 - 10187.869: 89.5663% ( 111) 00:10:40.016 10187.869 - 10247.447: 90.3475% ( 107) 00:10:40.016 10247.447 - 10307.025: 91.0412% ( 95) 00:10:40.016 10307.025 - 10366.604: 91.6910% ( 89) 00:10:40.016 10366.604 - 10426.182: 92.2751% ( 80) 00:10:40.016 10426.182 - 10485.760: 92.8008% ( 72) 00:10:40.016 10485.760 - 10545.338: 93.3046% ( 69) 00:10:40.016 10545.338 - 10604.916: 93.7719% ( 64) 00:10:40.016 10604.916 - 10664.495: 94.2027% ( 59) 00:10:40.016 10664.495 - 10724.073: 94.5386% ( 46) 00:10:40.016 10724.073 - 10783.651: 94.7649% ( 31) 00:10:40.016 10783.651 - 10843.229: 94.9985% ( 32) 00:10:40.016 10843.229 - 10902.807: 95.2030% ( 28) 00:10:40.016 10902.807 - 10962.385: 95.3709% ( 23) 00:10:40.016 10962.385 - 11021.964: 95.5680% ( 27) 00:10:40.016 11021.964 - 11081.542: 95.7068% ( 19) 00:10:40.016 11081.542 - 11141.120: 95.8455% ( 19) 00:10:40.016 11141.120 - 11200.698: 95.9623% ( 16) 00:10:40.016 11200.698 - 11260.276: 96.0864% ( 17) 00:10:40.016 11260.276 - 11319.855: 96.2033% ( 16) 00:10:40.016 11319.855 - 11379.433: 96.3347% ( 18) 00:10:40.016 11379.433 - 11439.011: 96.4515% ( 16) 00:10:40.016 11439.011 - 11498.589: 96.5537% ( 14) 00:10:40.016 11498.589 - 11558.167: 96.6560% ( 14) 00:10:40.016 11558.167 - 11617.745: 96.7509% ( 13) 00:10:40.016 11617.745 - 11677.324: 96.8604% ( 15) 00:10:40.016 11677.324 - 11736.902: 96.9845% ( 17) 00:10:40.016 11736.902 - 11796.480: 97.0940% ( 15) 00:10:40.016 11796.480 - 11856.058: 97.2182% ( 17) 00:10:40.016 11856.058 - 11915.636: 97.3350% ( 16) 00:10:40.016 11915.636 - 11975.215: 97.4299% ( 13) 00:10:40.016 11975.215 - 12034.793: 97.5394% ( 15) 00:10:40.016 12034.793 - 12094.371: 97.6197% ( 11) 00:10:40.016 
12094.371 - 12153.949: 97.7220% ( 14) 00:10:40.016 12153.949 - 12213.527: 97.8388% ( 16) 00:10:40.016 12213.527 - 12273.105: 97.9191% ( 11) 00:10:40.016 12273.105 - 12332.684: 98.0213% ( 14) 00:10:40.016 12332.684 - 12392.262: 98.1235% ( 14) 00:10:40.016 12392.262 - 12451.840: 98.2112% ( 12) 00:10:40.016 12451.840 - 12511.418: 98.2769% ( 9) 00:10:40.016 12511.418 - 12570.996: 98.3426% ( 9) 00:10:40.016 12570.996 - 12630.575: 98.4156% ( 10) 00:10:40.016 12630.575 - 12690.153: 98.4740% ( 8) 00:10:40.016 12690.153 - 12749.731: 98.5470% ( 10) 00:10:40.016 12749.731 - 12809.309: 98.6054% ( 8) 00:10:40.016 12809.309 - 12868.887: 98.6638% ( 8) 00:10:40.016 12868.887 - 12928.465: 98.6930% ( 4) 00:10:40.016 12928.465 - 12988.044: 98.7296% ( 5) 00:10:40.016 12988.044 - 13047.622: 98.7515% ( 3) 00:10:40.016 13047.622 - 13107.200: 98.7734% ( 3) 00:10:40.016 13107.200 - 13166.778: 98.7880% ( 2) 00:10:40.016 13166.778 - 13226.356: 98.8172% ( 4) 00:10:40.016 13226.356 - 13285.935: 98.8391% ( 3) 00:10:40.016 13285.935 - 13345.513: 98.8610% ( 3) 00:10:40.016 13345.513 - 13405.091: 98.8829% ( 3) 00:10:40.016 13405.091 - 13464.669: 98.9048% ( 3) 00:10:40.016 13464.669 - 13524.247: 98.9267% ( 3) 00:10:40.016 13524.247 - 13583.825: 98.9486% ( 3) 00:10:40.016 13583.825 - 13643.404: 98.9705% ( 3) 00:10:40.016 13643.404 - 13702.982: 98.9924% ( 3) 00:10:40.016 13702.982 - 13762.560: 99.0143% ( 3) 00:10:40.016 13762.560 - 13822.138: 99.0435% ( 4) 00:10:40.016 13822.138 - 13881.716: 99.0581% ( 2) 00:10:40.016 13881.716 - 13941.295: 99.0654% ( 1) 00:10:40.016 30980.655 - 31218.967: 99.0727% ( 1) 00:10:40.016 31218.967 - 31457.280: 99.1019% ( 4) 00:10:40.016 31457.280 - 31695.593: 99.1311% ( 4) 00:10:40.016 31695.593 - 31933.905: 99.1676% ( 5) 00:10:40.016 31933.905 - 32172.218: 99.2041% ( 5) 00:10:40.016 32172.218 - 32410.531: 99.2334% ( 4) 00:10:40.016 32410.531 - 32648.844: 99.2699% ( 5) 00:10:40.016 32648.844 - 32887.156: 99.3137% ( 6) 00:10:40.016 32887.156 - 33125.469: 99.3429% ( 4) 00:10:40.016 33125.469 - 33363.782: 99.3794% ( 5) 00:10:40.016 33363.782 - 33602.095: 99.4159% ( 5) 00:10:40.016 33602.095 - 33840.407: 99.4451% ( 4) 00:10:40.016 33840.407 - 34078.720: 99.4816% ( 5) 00:10:40.016 34078.720 - 34317.033: 99.5181% ( 5) 00:10:40.016 34317.033 - 34555.345: 99.5327% ( 2) 00:10:40.016 39083.287 - 39321.600: 99.5546% ( 3) 00:10:40.016 39321.600 - 39559.913: 99.5911% ( 5) 00:10:40.016 39559.913 - 39798.225: 99.6203% ( 4) 00:10:40.016 39798.225 - 40036.538: 99.6495% ( 4) 00:10:40.016 40036.538 - 40274.851: 99.6860% ( 5) 00:10:40.016 40274.851 - 40513.164: 99.7152% ( 4) 00:10:40.016 40513.164 - 40751.476: 99.7518% ( 5) 00:10:40.016 40751.476 - 40989.789: 99.7883% ( 5) 00:10:40.016 40989.789 - 41228.102: 99.8175% ( 4) 00:10:40.016 41228.102 - 41466.415: 99.8467% ( 4) 00:10:40.016 41466.415 - 41704.727: 99.8832% ( 5) 00:10:40.016 41704.727 - 41943.040: 99.9197% ( 5) 00:10:40.016 41943.040 - 42181.353: 99.9562% ( 5) 00:10:40.016 42181.353 - 42419.665: 99.9927% ( 5) 00:10:40.016 42419.665 - 42657.978: 100.0000% ( 1) 00:10:40.016 00:10:40.016 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:40.016 ============================================================================== 00:10:40.016 Range in us Cumulative IO count 00:10:40.016 7685.585 - 7745.164: 0.0146% ( 2) 00:10:40.016 7745.164 - 7804.742: 0.0803% ( 9) 00:10:40.016 7804.742 - 7864.320: 0.2555% ( 24) 00:10:40.016 7864.320 - 7923.898: 0.6133% ( 49) 00:10:40.016 7923.898 - 7983.476: 1.2923% ( 93) 00:10:40.016 7983.476 - 8043.055: 2.6577% ( 
187) 00:10:40.016 8043.055 - 8102.633: 4.6875% ( 278) 00:10:40.016 8102.633 - 8162.211: 7.4255% ( 375) 00:10:40.016 8162.211 - 8221.789: 10.7112% ( 450) 00:10:40.016 8221.789 - 8281.367: 14.3692% ( 501) 00:10:40.016 8281.367 - 8340.945: 18.2973% ( 538) 00:10:40.016 8340.945 - 8400.524: 22.4007% ( 562) 00:10:40.016 8400.524 - 8460.102: 26.5479% ( 568) 00:10:40.016 8460.102 - 8519.680: 30.5053% ( 542) 00:10:40.016 8519.680 - 8579.258: 34.6013% ( 561) 00:10:40.016 8579.258 - 8638.836: 38.6901% ( 560) 00:10:40.016 8638.836 - 8698.415: 42.8227% ( 566) 00:10:40.016 8698.415 - 8757.993: 47.0283% ( 576) 00:10:40.016 8757.993 - 8817.571: 51.2412% ( 577) 00:10:40.016 8817.571 - 8877.149: 55.2935% ( 555) 00:10:40.016 8877.149 - 8936.727: 59.0099% ( 509) 00:10:40.016 8936.727 - 8996.305: 62.4051% ( 465) 00:10:40.016 8996.305 - 9055.884: 65.3695% ( 406) 00:10:40.016 9055.884 - 9115.462: 67.7643% ( 328) 00:10:40.016 9115.462 - 9175.040: 69.8671% ( 288) 00:10:40.016 9175.040 - 9234.618: 71.5829% ( 235) 00:10:40.016 9234.618 - 9294.196: 73.0140% ( 196) 00:10:40.016 9294.196 - 9353.775: 74.2699% ( 172) 00:10:40.016 9353.775 - 9413.353: 75.4819% ( 166) 00:10:40.016 9413.353 - 9472.931: 76.7085% ( 168) 00:10:40.016 9472.931 - 9532.509: 77.9352% ( 168) 00:10:40.016 9532.509 - 9592.087: 79.2275% ( 177) 00:10:40.016 9592.087 - 9651.665: 80.4395% ( 166) 00:10:40.016 9651.665 - 9711.244: 81.6443% ( 165) 00:10:40.016 9711.244 - 9770.822: 82.8636% ( 167) 00:10:40.016 9770.822 - 9830.400: 84.0318% ( 160) 00:10:40.016 9830.400 - 9889.978: 85.1562% ( 154) 00:10:40.016 9889.978 - 9949.556: 86.1346% ( 134) 00:10:40.016 9949.556 - 10009.135: 87.0254% ( 122) 00:10:40.016 10009.135 - 10068.713: 87.9527% ( 127) 00:10:40.016 10068.713 - 10128.291: 88.7412% ( 108) 00:10:40.016 10128.291 - 10187.869: 89.6101% ( 119) 00:10:40.016 10187.869 - 10247.447: 90.3768% ( 105) 00:10:40.016 10247.447 - 10307.025: 91.0777% ( 96) 00:10:40.016 10307.025 - 10366.604: 91.7129% ( 87) 00:10:40.016 10366.604 - 10426.182: 92.3481% ( 87) 00:10:40.016 10426.182 - 10485.760: 92.9322% ( 80) 00:10:40.016 10485.760 - 10545.338: 93.5018% ( 78) 00:10:40.016 10545.338 - 10604.916: 93.9982% ( 68) 00:10:40.016 10604.916 - 10664.495: 94.3852% ( 53) 00:10:40.016 10664.495 - 10724.073: 94.6773% ( 40) 00:10:40.016 10724.073 - 10783.651: 94.9182% ( 33) 00:10:40.016 10783.651 - 10843.229: 95.1154% ( 27) 00:10:40.016 10843.229 - 10902.807: 95.3052% ( 26) 00:10:40.016 10902.807 - 10962.385: 95.4804% ( 24) 00:10:40.016 10962.385 - 11021.964: 95.6338% ( 21) 00:10:40.016 11021.964 - 11081.542: 95.7579% ( 17) 00:10:40.016 11081.542 - 11141.120: 95.9039% ( 20) 00:10:40.016 11141.120 - 11200.698: 96.0207% ( 16) 00:10:40.016 11200.698 - 11260.276: 96.1303% ( 15) 00:10:40.016 11260.276 - 11319.855: 96.2325% ( 14) 00:10:40.016 11319.855 - 11379.433: 96.3201% ( 12) 00:10:40.016 11379.433 - 11439.011: 96.3931% ( 10) 00:10:40.016 11439.011 - 11498.589: 96.4953% ( 14) 00:10:40.016 11498.589 - 11558.167: 96.5683% ( 10) 00:10:40.016 11558.167 - 11617.745: 96.6560% ( 12) 00:10:40.016 11617.745 - 11677.324: 96.7801% ( 17) 00:10:40.016 11677.324 - 11736.902: 96.9042% ( 17) 00:10:40.016 11736.902 - 11796.480: 96.9991% ( 13) 00:10:40.016 11796.480 - 11856.058: 97.1159% ( 16) 00:10:40.016 11856.058 - 11915.636: 97.2182% ( 14) 00:10:40.016 11915.636 - 11975.215: 97.3277% ( 15) 00:10:40.016 11975.215 - 12034.793: 97.4445% ( 16) 00:10:40.016 12034.793 - 12094.371: 97.5540% ( 15) 00:10:40.016 12094.371 - 12153.949: 97.6416% ( 12) 00:10:40.016 12153.949 - 12213.527: 97.7293% ( 12) 
00:10:40.016 12213.527 - 12273.105: 97.8169% ( 12) 00:10:40.017 12273.105 - 12332.684: 97.9045% ( 12) 00:10:40.017 12332.684 - 12392.262: 97.9848% ( 11) 00:10:40.017 12392.262 - 12451.840: 98.0724% ( 12) 00:10:40.017 12451.840 - 12511.418: 98.1600% ( 12) 00:10:40.017 12511.418 - 12570.996: 98.2623% ( 14) 00:10:40.017 12570.996 - 12630.575: 98.3864% ( 17) 00:10:40.017 12630.575 - 12690.153: 98.4813% ( 13) 00:10:40.017 12690.153 - 12749.731: 98.5616% ( 11) 00:10:40.017 12749.731 - 12809.309: 98.6419% ( 11) 00:10:40.017 12809.309 - 12868.887: 98.7223% ( 11) 00:10:40.017 12868.887 - 12928.465: 98.7734% ( 7) 00:10:40.017 12928.465 - 12988.044: 98.8099% ( 5) 00:10:40.017 12988.044 - 13047.622: 98.8318% ( 3) 00:10:40.017 13047.622 - 13107.200: 98.8537% ( 3) 00:10:40.017 13107.200 - 13166.778: 98.8756% ( 3) 00:10:40.017 13166.778 - 13226.356: 98.8902% ( 2) 00:10:40.017 13226.356 - 13285.935: 98.9121% ( 3) 00:10:40.017 13285.935 - 13345.513: 98.9413% ( 4) 00:10:40.017 13345.513 - 13405.091: 98.9632% ( 3) 00:10:40.017 13405.091 - 13464.669: 98.9851% ( 3) 00:10:40.017 13464.669 - 13524.247: 99.0070% ( 3) 00:10:40.017 13524.247 - 13583.825: 99.0289% ( 3) 00:10:40.017 13583.825 - 13643.404: 99.0508% ( 3) 00:10:40.017 13643.404 - 13702.982: 99.0654% ( 2) 00:10:40.017 26810.182 - 26929.338: 99.0727% ( 1) 00:10:40.017 26929.338 - 27048.495: 99.0800% ( 1) 00:10:40.017 27048.495 - 27167.651: 99.1019% ( 3) 00:10:40.017 27167.651 - 27286.807: 99.1238% ( 3) 00:10:40.017 27286.807 - 27405.964: 99.1384% ( 2) 00:10:40.017 27405.964 - 27525.120: 99.1530% ( 2) 00:10:40.017 27525.120 - 27644.276: 99.1676% ( 2) 00:10:40.017 27644.276 - 27763.433: 99.1895% ( 3) 00:10:40.017 27763.433 - 27882.589: 99.2041% ( 2) 00:10:40.017 27882.589 - 28001.745: 99.2261% ( 3) 00:10:40.017 28001.745 - 28120.902: 99.2407% ( 2) 00:10:40.017 28120.902 - 28240.058: 99.2553% ( 2) 00:10:40.017 28240.058 - 28359.215: 99.2772% ( 3) 00:10:40.017 28359.215 - 28478.371: 99.2918% ( 2) 00:10:40.017 28478.371 - 28597.527: 99.3064% ( 2) 00:10:40.017 28597.527 - 28716.684: 99.3210% ( 2) 00:10:40.017 28716.684 - 28835.840: 99.3429% ( 3) 00:10:40.017 28835.840 - 28954.996: 99.3575% ( 2) 00:10:40.017 28954.996 - 29074.153: 99.3721% ( 2) 00:10:40.017 29074.153 - 29193.309: 99.3940% ( 3) 00:10:40.017 29193.309 - 29312.465: 99.4086% ( 2) 00:10:40.017 29312.465 - 29431.622: 99.4305% ( 3) 00:10:40.017 29431.622 - 29550.778: 99.4451% ( 2) 00:10:40.017 29550.778 - 29669.935: 99.4597% ( 2) 00:10:40.017 29669.935 - 29789.091: 99.4743% ( 2) 00:10:40.017 29789.091 - 29908.247: 99.4889% ( 2) 00:10:40.017 29908.247 - 30027.404: 99.5108% ( 3) 00:10:40.017 30027.404 - 30146.560: 99.5254% ( 2) 00:10:40.017 30146.560 - 30265.716: 99.5327% ( 1) 00:10:40.017 34793.658 - 35031.971: 99.5546% ( 3) 00:10:40.017 35031.971 - 35270.284: 99.5911% ( 5) 00:10:40.017 35270.284 - 35508.596: 99.6203% ( 4) 00:10:40.017 35508.596 - 35746.909: 99.6568% ( 5) 00:10:40.017 35746.909 - 35985.222: 99.6933% ( 5) 00:10:40.017 35985.222 - 36223.535: 99.7298% ( 5) 00:10:40.017 36223.535 - 36461.847: 99.7664% ( 5) 00:10:40.017 36461.847 - 36700.160: 99.7956% ( 4) 00:10:40.017 36700.160 - 36938.473: 99.8248% ( 4) 00:10:40.017 36938.473 - 37176.785: 99.8613% ( 5) 00:10:40.017 37176.785 - 37415.098: 99.8978% ( 5) 00:10:40.017 37415.098 - 37653.411: 99.9270% ( 4) 00:10:40.017 37653.411 - 37891.724: 99.9635% ( 5) 00:10:40.017 37891.724 - 38130.036: 99.9927% ( 4) 00:10:40.017 38130.036 - 38368.349: 100.0000% ( 1) 00:10:40.017 00:10:40.017 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 
00:10:40.017 ============================================================================== 00:10:40.017 Range in us Cumulative IO count 00:10:40.017 7745.164 - 7804.742: 0.0511% ( 7) 00:10:40.017 7804.742 - 7864.320: 0.2117% ( 22) 00:10:40.017 7864.320 - 7923.898: 0.5330% ( 44) 00:10:40.017 7923.898 - 7983.476: 1.2631% ( 100) 00:10:40.017 7983.476 - 8043.055: 2.6504% ( 190) 00:10:40.017 8043.055 - 8102.633: 4.7751% ( 291) 00:10:40.017 8102.633 - 8162.211: 7.5204% ( 376) 00:10:40.017 8162.211 - 8221.789: 10.8645% ( 458) 00:10:40.017 8221.789 - 8281.367: 14.5152% ( 500) 00:10:40.017 8281.367 - 8340.945: 18.3922% ( 531) 00:10:40.017 8340.945 - 8400.524: 22.4080% ( 550) 00:10:40.017 8400.524 - 8460.102: 26.3289% ( 537) 00:10:40.017 8460.102 - 8519.680: 30.3665% ( 553) 00:10:40.017 8519.680 - 8579.258: 34.5210% ( 569) 00:10:40.017 8579.258 - 8638.836: 38.6244% ( 562) 00:10:40.017 8638.836 - 8698.415: 42.8884% ( 584) 00:10:40.017 8698.415 - 8757.993: 47.1452% ( 583) 00:10:40.017 8757.993 - 8817.571: 51.3800% ( 580) 00:10:40.017 8817.571 - 8877.149: 55.4322% ( 555) 00:10:40.017 8877.149 - 8936.727: 59.1998% ( 516) 00:10:40.017 8936.727 - 8996.305: 62.5730% ( 462) 00:10:40.017 8996.305 - 9055.884: 65.4060% ( 388) 00:10:40.017 9055.884 - 9115.462: 67.8081% ( 329) 00:10:40.017 9115.462 - 9175.040: 69.7576% ( 267) 00:10:40.017 9175.040 - 9234.618: 71.4807% ( 236) 00:10:40.017 9234.618 - 9294.196: 72.9483% ( 201) 00:10:40.017 9294.196 - 9353.775: 74.1311% ( 162) 00:10:40.017 9353.775 - 9413.353: 75.3067% ( 161) 00:10:40.017 9413.353 - 9472.931: 76.5114% ( 165) 00:10:40.017 9472.931 - 9532.509: 77.7453% ( 169) 00:10:40.017 9532.509 - 9592.087: 78.9647% ( 167) 00:10:40.017 9592.087 - 9651.665: 80.1767% ( 166) 00:10:40.017 9651.665 - 9711.244: 81.4471% ( 174) 00:10:40.017 9711.244 - 9770.822: 82.6811% ( 169) 00:10:40.017 9770.822 - 9830.400: 83.9223% ( 170) 00:10:40.017 9830.400 - 9889.978: 85.0394% ( 153) 00:10:40.017 9889.978 - 9949.556: 86.0908% ( 144) 00:10:40.017 9949.556 - 10009.135: 87.0254% ( 128) 00:10:40.017 10009.135 - 10068.713: 87.9600% ( 128) 00:10:40.017 10068.713 - 10128.291: 88.8289% ( 119) 00:10:40.017 10128.291 - 10187.869: 89.6393% ( 111) 00:10:40.017 10187.869 - 10247.447: 90.4425% ( 110) 00:10:40.017 10247.447 - 10307.025: 91.2091% ( 105) 00:10:40.017 10307.025 - 10366.604: 91.9393% ( 100) 00:10:40.017 10366.604 - 10426.182: 92.5891% ( 89) 00:10:40.017 10426.182 - 10485.760: 93.1367% ( 75) 00:10:40.017 10485.760 - 10545.338: 93.6478% ( 70) 00:10:40.017 10545.338 - 10604.916: 94.1224% ( 65) 00:10:40.017 10604.916 - 10664.495: 94.5459% ( 58) 00:10:40.017 10664.495 - 10724.073: 94.8744% ( 45) 00:10:40.017 10724.073 - 10783.651: 95.1227% ( 34) 00:10:40.017 10783.651 - 10843.229: 95.3636% ( 33) 00:10:40.017 10843.229 - 10902.807: 95.5461% ( 25) 00:10:40.017 10902.807 - 10962.385: 95.6849% ( 19) 00:10:40.017 10962.385 - 11021.964: 95.8017% ( 16) 00:10:40.017 11021.964 - 11081.542: 95.8966% ( 13) 00:10:40.017 11081.542 - 11141.120: 95.9915% ( 13) 00:10:40.017 11141.120 - 11200.698: 96.0791% ( 12) 00:10:40.017 11200.698 - 11260.276: 96.1522% ( 10) 00:10:40.017 11260.276 - 11319.855: 96.2106% ( 8) 00:10:40.017 11319.855 - 11379.433: 96.2836% ( 10) 00:10:40.017 11379.433 - 11439.011: 96.3639% ( 11) 00:10:40.017 11439.011 - 11498.589: 96.4296% ( 9) 00:10:40.017 11498.589 - 11558.167: 96.5026% ( 10) 00:10:40.017 11558.167 - 11617.745: 96.5975% ( 13) 00:10:40.017 11617.745 - 11677.324: 96.7144% ( 16) 00:10:40.017 11677.324 - 11736.902: 96.7874% ( 10) 00:10:40.017 11736.902 - 11796.480: 96.8896% 
( 14) 00:10:40.017 11796.480 - 11856.058: 96.9918% ( 14) 00:10:40.017 11856.058 - 11915.636: 97.0867% ( 13) 00:10:40.017 11915.636 - 11975.215: 97.1963% ( 15) 00:10:40.017 11975.215 - 12034.793: 97.3058% ( 15) 00:10:40.017 12034.793 - 12094.371: 97.4080% ( 14) 00:10:40.017 12094.371 - 12153.949: 97.5102% ( 14) 00:10:40.017 12153.949 - 12213.527: 97.6051% ( 13) 00:10:40.017 12213.527 - 12273.105: 97.7074% ( 14) 00:10:40.017 12273.105 - 12332.684: 97.8023% ( 13) 00:10:40.017 12332.684 - 12392.262: 97.9118% ( 15) 00:10:40.017 12392.262 - 12451.840: 97.9921% ( 11) 00:10:40.017 12451.840 - 12511.418: 98.0943% ( 14) 00:10:40.017 12511.418 - 12570.996: 98.2112% ( 16) 00:10:40.017 12570.996 - 12630.575: 98.3061% ( 13) 00:10:40.017 12630.575 - 12690.153: 98.4010% ( 13) 00:10:40.017 12690.153 - 12749.731: 98.4959% ( 13) 00:10:40.017 12749.731 - 12809.309: 98.5762% ( 11) 00:10:40.017 12809.309 - 12868.887: 98.6565% ( 11) 00:10:40.017 12868.887 - 12928.465: 98.7077% ( 7) 00:10:40.017 12928.465 - 12988.044: 98.7515% ( 6) 00:10:40.017 12988.044 - 13047.622: 98.7880% ( 5) 00:10:40.017 13047.622 - 13107.200: 98.8318% ( 6) 00:10:40.017 13107.200 - 13166.778: 98.8756% ( 6) 00:10:40.017 13166.778 - 13226.356: 98.9048% ( 4) 00:10:40.017 13226.356 - 13285.935: 98.9340% ( 4) 00:10:40.017 13285.935 - 13345.513: 98.9559% ( 3) 00:10:40.017 13345.513 - 13405.091: 98.9778% ( 3) 00:10:40.017 13405.091 - 13464.669: 98.9997% ( 3) 00:10:40.017 13464.669 - 13524.247: 99.0216% ( 3) 00:10:40.017 13524.247 - 13583.825: 99.0508% ( 4) 00:10:40.017 13583.825 - 13643.404: 99.0654% ( 2) 00:10:40.017 22401.396 - 22520.553: 99.0727% ( 1) 00:10:40.017 22520.553 - 22639.709: 99.0873% ( 2) 00:10:40.017 22639.709 - 22758.865: 99.1019% ( 2) 00:10:40.017 22758.865 - 22878.022: 99.1165% ( 2) 00:10:40.017 22878.022 - 22997.178: 99.1311% ( 2) 00:10:40.017 22997.178 - 23116.335: 99.1530% ( 3) 00:10:40.017 23116.335 - 23235.491: 99.1676% ( 2) 00:10:40.017 23235.491 - 23354.647: 99.1822% ( 2) 00:10:40.017 23354.647 - 23473.804: 99.1968% ( 2) 00:10:40.017 23473.804 - 23592.960: 99.2114% ( 2) 00:10:40.017 23592.960 - 23712.116: 99.2334% ( 3) 00:10:40.017 23712.116 - 23831.273: 99.2480% ( 2) 00:10:40.017 23831.273 - 23950.429: 99.2626% ( 2) 00:10:40.017 23950.429 - 24069.585: 99.2772% ( 2) 00:10:40.017 24069.585 - 24188.742: 99.2918% ( 2) 00:10:40.018 24188.742 - 24307.898: 99.3137% ( 3) 00:10:40.018 24307.898 - 24427.055: 99.3283% ( 2) 00:10:40.018 24427.055 - 24546.211: 99.3429% ( 2) 00:10:40.018 24546.211 - 24665.367: 99.3575% ( 2) 00:10:40.018 24665.367 - 24784.524: 99.3794% ( 3) 00:10:40.018 24784.524 - 24903.680: 99.3940% ( 2) 00:10:40.018 24903.680 - 25022.836: 99.4159% ( 3) 00:10:40.018 25022.836 - 25141.993: 99.4305% ( 2) 00:10:40.018 25141.993 - 25261.149: 99.4451% ( 2) 00:10:40.018 25261.149 - 25380.305: 99.4597% ( 2) 00:10:40.018 25380.305 - 25499.462: 99.4816% ( 3) 00:10:40.018 25499.462 - 25618.618: 99.4962% ( 2) 00:10:40.018 25618.618 - 25737.775: 99.5108% ( 2) 00:10:40.018 25737.775 - 25856.931: 99.5327% ( 3) 00:10:40.018 30504.029 - 30742.342: 99.5546% ( 3) 00:10:40.018 30742.342 - 30980.655: 99.5911% ( 5) 00:10:40.018 30980.655 - 31218.967: 99.6276% ( 5) 00:10:40.018 31218.967 - 31457.280: 99.6495% ( 3) 00:10:40.018 31457.280 - 31695.593: 99.6860% ( 5) 00:10:40.018 31695.593 - 31933.905: 99.7225% ( 5) 00:10:40.018 31933.905 - 32172.218: 99.7518% ( 4) 00:10:40.018 32172.218 - 32410.531: 99.7883% ( 5) 00:10:40.018 32410.531 - 32648.844: 99.8175% ( 4) 00:10:40.018 32648.844 - 32887.156: 99.8540% ( 5) 00:10:40.018 32887.156 - 
33125.469: 99.8905% ( 5) 00:10:40.018 33125.469 - 33363.782: 99.9270% ( 5) 00:10:40.018 33363.782 - 33602.095: 99.9635% ( 5) 00:10:40.018 33602.095 - 33840.407: 99.9927% ( 4) 00:10:40.018 33840.407 - 34078.720: 100.0000% ( 1) 00:10:40.018 00:10:40.018 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:40.018 ============================================================================== 00:10:40.018 Range in us Cumulative IO count 00:10:40.018 7685.585 - 7745.164: 0.0073% ( 1) 00:10:40.018 7745.164 - 7804.742: 0.0511% ( 6) 00:10:40.018 7804.742 - 7864.320: 0.2044% ( 21) 00:10:40.018 7864.320 - 7923.898: 0.5841% ( 52) 00:10:40.018 7923.898 - 7983.476: 1.3508% ( 105) 00:10:40.018 7983.476 - 8043.055: 2.6285% ( 175) 00:10:40.018 8043.055 - 8102.633: 4.6948% ( 283) 00:10:40.018 8102.633 - 8162.211: 7.5935% ( 397) 00:10:40.018 8162.211 - 8221.789: 10.9156% ( 455) 00:10:40.018 8221.789 - 8281.367: 14.5590% ( 499) 00:10:40.018 8281.367 - 8340.945: 18.4068% ( 527) 00:10:40.018 8340.945 - 8400.524: 22.4299% ( 551) 00:10:40.018 8400.524 - 8460.102: 26.4530% ( 551) 00:10:40.018 8460.102 - 8519.680: 30.4761% ( 551) 00:10:40.018 8519.680 - 8579.258: 34.5721% ( 561) 00:10:40.018 8579.258 - 8638.836: 38.7120% ( 567) 00:10:40.018 8638.836 - 8698.415: 42.8811% ( 571) 00:10:40.018 8698.415 - 8757.993: 47.0210% ( 567) 00:10:40.018 8757.993 - 8817.571: 51.3216% ( 589) 00:10:40.018 8817.571 - 8877.149: 55.3811% ( 556) 00:10:40.018 8877.149 - 8936.727: 59.1998% ( 523) 00:10:40.018 8936.727 - 8996.305: 62.7117% ( 481) 00:10:40.018 8996.305 - 9055.884: 65.4717% ( 378) 00:10:40.018 9055.884 - 9115.462: 67.8373% ( 324) 00:10:40.018 9115.462 - 9175.040: 69.8306% ( 273) 00:10:40.018 9175.040 - 9234.618: 71.5318% ( 233) 00:10:40.018 9234.618 - 9294.196: 72.8680% ( 183) 00:10:40.018 9294.196 - 9353.775: 74.1311% ( 173) 00:10:40.018 9353.775 - 9413.353: 75.2629% ( 155) 00:10:40.018 9413.353 - 9472.931: 76.4238% ( 159) 00:10:40.018 9472.931 - 9532.509: 77.6504% ( 168) 00:10:40.018 9532.509 - 9592.087: 78.8332% ( 162) 00:10:40.018 9592.087 - 9651.665: 80.0599% ( 168) 00:10:40.018 9651.665 - 9711.244: 81.2865% ( 168) 00:10:40.018 9711.244 - 9770.822: 82.5423% ( 172) 00:10:40.018 9770.822 - 9830.400: 83.7690% ( 168) 00:10:40.018 9830.400 - 9889.978: 84.8788% ( 152) 00:10:40.018 9889.978 - 9949.556: 85.9594% ( 148) 00:10:40.018 9949.556 - 10009.135: 86.9378% ( 134) 00:10:40.018 10009.135 - 10068.713: 87.8505% ( 125) 00:10:40.018 10068.713 - 10128.291: 88.7485% ( 123) 00:10:40.018 10128.291 - 10187.869: 89.5663% ( 112) 00:10:40.018 10187.869 - 10247.447: 90.4133% ( 116) 00:10:40.018 10247.447 - 10307.025: 91.1580% ( 102) 00:10:40.018 10307.025 - 10366.604: 91.8370% ( 93) 00:10:40.018 10366.604 - 10426.182: 92.4723% ( 87) 00:10:40.018 10426.182 - 10485.760: 93.0637% ( 81) 00:10:40.018 10485.760 - 10545.338: 93.5967% ( 73) 00:10:40.018 10545.338 - 10604.916: 94.0932% ( 68) 00:10:40.018 10604.916 - 10664.495: 94.5239% ( 59) 00:10:40.018 10664.495 - 10724.073: 94.8598% ( 46) 00:10:40.018 10724.073 - 10783.651: 95.1592% ( 41) 00:10:40.018 10783.651 - 10843.229: 95.3928% ( 32) 00:10:40.018 10843.229 - 10902.807: 95.5900% ( 27) 00:10:40.018 10902.807 - 10962.385: 95.7652% ( 24) 00:10:40.018 10962.385 - 11021.964: 95.9112% ( 20) 00:10:40.018 11021.964 - 11081.542: 96.0207% ( 15) 00:10:40.018 11081.542 - 11141.120: 96.1084% ( 12) 00:10:40.018 11141.120 - 11200.698: 96.2179% ( 15) 00:10:40.018 11200.698 - 11260.276: 96.3274% ( 15) 00:10:40.018 11260.276 - 11319.855: 96.4150% ( 12) 00:10:40.018 11319.855 - 11379.433: 
96.4880% ( 10) 00:10:40.018 11379.433 - 11439.011: 96.5464% ( 8) 00:10:40.018 11439.011 - 11498.589: 96.5975% ( 7) 00:10:40.018 11498.589 - 11558.167: 96.6560% ( 8) 00:10:40.018 11558.167 - 11617.745: 96.7071% ( 7) 00:10:40.018 11617.745 - 11677.324: 96.7509% ( 6) 00:10:40.018 11677.324 - 11736.902: 96.8166% ( 9) 00:10:40.018 11736.902 - 11796.480: 96.8896% ( 10) 00:10:40.018 11796.480 - 11856.058: 96.9553% ( 9) 00:10:40.018 11856.058 - 11915.636: 97.0429% ( 12) 00:10:40.018 11915.636 - 11975.215: 97.1452% ( 14) 00:10:40.018 11975.215 - 12034.793: 97.2766% ( 18) 00:10:40.018 12034.793 - 12094.371: 97.3861% ( 15) 00:10:40.018 12094.371 - 12153.949: 97.4883% ( 14) 00:10:40.018 12153.949 - 12213.527: 97.5905% ( 14) 00:10:40.018 12213.527 - 12273.105: 97.6928% ( 14) 00:10:40.018 12273.105 - 12332.684: 97.7950% ( 14) 00:10:40.018 12332.684 - 12392.262: 97.8753% ( 11) 00:10:40.018 12392.262 - 12451.840: 97.9775% ( 14) 00:10:40.018 12451.840 - 12511.418: 98.0724% ( 13) 00:10:40.018 12511.418 - 12570.996: 98.1527% ( 11) 00:10:40.018 12570.996 - 12630.575: 98.2185% ( 9) 00:10:40.018 12630.575 - 12690.153: 98.2915% ( 10) 00:10:40.018 12690.153 - 12749.731: 98.3645% ( 10) 00:10:40.018 12749.731 - 12809.309: 98.4521% ( 12) 00:10:40.018 12809.309 - 12868.887: 98.5178% ( 9) 00:10:40.018 12868.887 - 12928.465: 98.5981% ( 11) 00:10:40.018 12928.465 - 12988.044: 98.6638% ( 9) 00:10:40.018 12988.044 - 13047.622: 98.7296% ( 9) 00:10:40.018 13047.622 - 13107.200: 98.7807% ( 7) 00:10:40.018 13107.200 - 13166.778: 98.8318% ( 7) 00:10:40.018 13166.778 - 13226.356: 98.8829% ( 7) 00:10:40.018 13226.356 - 13285.935: 98.9340% ( 7) 00:10:40.018 13285.935 - 13345.513: 98.9632% ( 4) 00:10:40.018 13345.513 - 13405.091: 98.9997% ( 5) 00:10:40.018 13405.091 - 13464.669: 99.0216% ( 3) 00:10:40.018 13464.669 - 13524.247: 99.0435% ( 3) 00:10:40.019 13524.247 - 13583.825: 99.0581% ( 2) 00:10:40.019 13583.825 - 13643.404: 99.0654% ( 1) 00:10:40.019 18111.767 - 18230.924: 99.0727% ( 1) 00:10:40.019 18230.924 - 18350.080: 99.0873% ( 2) 00:10:40.019 18350.080 - 18469.236: 99.1092% ( 3) 00:10:40.019 18469.236 - 18588.393: 99.1238% ( 2) 00:10:40.019 18588.393 - 18707.549: 99.1384% ( 2) 00:10:40.019 18707.549 - 18826.705: 99.1603% ( 3) 00:10:40.019 18826.705 - 18945.862: 99.1749% ( 2) 00:10:40.019 18945.862 - 19065.018: 99.1968% ( 3) 00:10:40.019 19065.018 - 19184.175: 99.2114% ( 2) 00:10:40.019 19184.175 - 19303.331: 99.2334% ( 3) 00:10:40.019 19303.331 - 19422.487: 99.2480% ( 2) 00:10:40.019 19422.487 - 19541.644: 99.2626% ( 2) 00:10:40.019 19541.644 - 19660.800: 99.2845% ( 3) 00:10:40.019 19660.800 - 19779.956: 99.2991% ( 2) 00:10:40.019 19779.956 - 19899.113: 99.3137% ( 2) 00:10:40.019 19899.113 - 20018.269: 99.3283% ( 2) 00:10:40.019 20018.269 - 20137.425: 99.3502% ( 3) 00:10:40.019 20137.425 - 20256.582: 99.3648% ( 2) 00:10:40.019 20256.582 - 20375.738: 99.3867% ( 3) 00:10:40.019 20375.738 - 20494.895: 99.4013% ( 2) 00:10:40.019 20494.895 - 20614.051: 99.4232% ( 3) 00:10:40.019 20614.051 - 20733.207: 99.4378% ( 2) 00:10:40.019 20733.207 - 20852.364: 99.4524% ( 2) 00:10:40.019 20852.364 - 20971.520: 99.4670% ( 2) 00:10:40.019 20971.520 - 21090.676: 99.4889% ( 3) 00:10:40.019 21090.676 - 21209.833: 99.5035% ( 2) 00:10:40.019 21209.833 - 21328.989: 99.5181% ( 2) 00:10:40.019 21328.989 - 21448.145: 99.5327% ( 2) 00:10:40.019 26095.244 - 26214.400: 99.5473% ( 2) 00:10:40.019 26214.400 - 26333.556: 99.5619% ( 2) 00:10:40.019 26333.556 - 26452.713: 99.5838% ( 3) 00:10:40.019 26452.713 - 26571.869: 99.5984% ( 2) 00:10:40.019 26571.869 
- 26691.025: 99.6130% ( 2) 00:10:40.019 26691.025 - 26810.182: 99.6276% ( 2) 00:10:40.019 26810.182 - 26929.338: 99.6495% ( 3) 00:10:40.019 26929.338 - 27048.495: 99.6641% ( 2) 00:10:40.019 27048.495 - 27167.651: 99.6787% ( 2) 00:10:40.019 27167.651 - 27286.807: 99.7006% ( 3) 00:10:40.019 27286.807 - 27405.964: 99.7152% ( 2) 00:10:40.019 27405.964 - 27525.120: 99.7298% ( 2) 00:10:40.019 27525.120 - 27644.276: 99.7518% ( 3) 00:10:40.019 27644.276 - 27763.433: 99.7664% ( 2) 00:10:40.019 27763.433 - 27882.589: 99.7883% ( 3) 00:10:40.019 27882.589 - 28001.745: 99.8029% ( 2) 00:10:40.019 28001.745 - 28120.902: 99.8175% ( 2) 00:10:40.019 28120.902 - 28240.058: 99.8321% ( 2) 00:10:40.019 28240.058 - 28359.215: 99.8540% ( 3) 00:10:40.019 28359.215 - 28478.371: 99.8613% ( 1) 00:10:40.019 28478.371 - 28597.527: 99.8832% ( 3) 00:10:40.019 28597.527 - 28716.684: 99.8978% ( 2) 00:10:40.019 28716.684 - 28835.840: 99.9124% ( 2) 00:10:40.019 28835.840 - 28954.996: 99.9343% ( 3) 00:10:40.019 28954.996 - 29074.153: 99.9489% ( 2) 00:10:40.019 29074.153 - 29193.309: 99.9635% ( 2) 00:10:40.019 29193.309 - 29312.465: 99.9854% ( 3) 00:10:40.019 29312.465 - 29431.622: 100.0000% ( 2) 00:10:40.019 00:10:40.019 09:59:29 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:41.399 Initializing NVMe Controllers 00:10:41.399 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:41.399 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:41.399 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:41.399 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:41.399 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:41.399 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:41.399 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:41.399 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:41.399 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:41.399 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:41.399 Initialization complete. Launching workers. 
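For readability, the spdk_nvme_perf invocation captured above is repeated below with each flag annotated. This is a minimal sketch for reference only; the flag meanings are assumptions based on typical spdk_nvme_perf usage and are not taken from this log.

# Sketch of the invocation from the log above (flag annotations are assumptions):
#   -q 128    queue depth: 128 outstanding I/Os
#   -w write  I/O pattern: 100% writes
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       enable latency tracking (repeated -L requests detailed histograms)
#   -i 0      shared memory group ID
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0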
00:10:41.399 ======================================================== 00:10:41.399 Latency(us) 00:10:41.399 Device Information : IOPS MiB/s Average min max 00:10:41.399 PCIE (0000:00:10.0) NSID 1 from core 0: 11021.10 129.15 11638.24 8335.33 41939.27 00:10:41.399 PCIE (0000:00:11.0) NSID 1 from core 0: 11021.10 129.15 11612.63 8428.44 39260.81 00:10:41.399 PCIE (0000:00:13.0) NSID 1 from core 0: 11021.10 129.15 11586.46 8442.57 37189.39 00:10:41.399 PCIE (0000:00:12.0) NSID 1 from core 0: 11021.10 129.15 11559.81 8472.95 34519.33 00:10:41.399 PCIE (0000:00:12.0) NSID 2 from core 0: 11021.10 129.15 11532.50 8423.78 31759.57 00:10:41.399 PCIE (0000:00:12.0) NSID 3 from core 0: 11021.10 129.15 11506.02 8504.98 29054.49 00:10:41.399 ======================================================== 00:10:41.399 Total : 66126.63 774.92 11572.61 8335.33 41939.27 00:10:41.399 00:10:41.399 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:41.399 ================================================================================= 00:10:41.399 1.00000% : 8579.258us 00:10:41.399 10.00000% : 9294.196us 00:10:41.399 25.00000% : 9830.400us 00:10:41.399 50.00000% : 10604.916us 00:10:41.399 75.00000% : 12451.840us 00:10:41.399 90.00000% : 15371.171us 00:10:41.399 95.00000% : 16801.047us 00:10:41.399 98.00000% : 18111.767us 00:10:41.399 99.00000% : 31218.967us 00:10:41.399 99.50000% : 39798.225us 00:10:41.399 99.90000% : 41704.727us 00:10:41.399 99.99000% : 41943.040us 00:10:41.399 99.99900% : 41943.040us 00:10:41.399 99.99990% : 41943.040us 00:10:41.399 99.99999% : 41943.040us 00:10:41.399 00:10:41.399 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:41.399 ================================================================================= 00:10:41.399 1.00000% : 8698.415us 00:10:41.399 10.00000% : 9353.775us 00:10:41.399 25.00000% : 9830.400us 00:10:41.399 50.00000% : 10545.338us 00:10:41.399 75.00000% : 12451.840us 00:10:41.399 90.00000% : 15371.171us 00:10:41.399 95.00000% : 16681.891us 00:10:41.399 98.00000% : 17992.611us 00:10:41.399 99.00000% : 29550.778us 00:10:41.399 99.50000% : 37415.098us 00:10:41.399 99.90000% : 39083.287us 00:10:41.399 99.99000% : 39321.600us 00:10:41.399 99.99900% : 39321.600us 00:10:41.399 99.99990% : 39321.600us 00:10:41.399 99.99999% : 39321.600us 00:10:41.399 00:10:41.399 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:41.399 ================================================================================= 00:10:41.399 1.00000% : 8698.415us 00:10:41.399 10.00000% : 9294.196us 00:10:41.399 25.00000% : 9830.400us 00:10:41.399 50.00000% : 10604.916us 00:10:41.399 75.00000% : 12451.840us 00:10:41.399 90.00000% : 15371.171us 00:10:41.399 95.00000% : 16801.047us 00:10:41.399 98.00000% : 18111.767us 00:10:41.399 99.00000% : 27525.120us 00:10:41.399 99.50000% : 35270.284us 00:10:41.399 99.90000% : 36938.473us 00:10:41.399 99.99000% : 37176.785us 00:10:41.399 99.99900% : 37415.098us 00:10:41.399 99.99990% : 37415.098us 00:10:41.399 99.99999% : 37415.098us 00:10:41.399 00:10:41.399 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:41.399 ================================================================================= 00:10:41.399 1.00000% : 8757.993us 00:10:41.399 10.00000% : 9294.196us 00:10:41.399 25.00000% : 9830.400us 00:10:41.399 50.00000% : 10604.916us 00:10:41.399 75.00000% : 12451.840us 00:10:41.399 90.00000% : 15252.015us 00:10:41.399 95.00000% : 16681.891us 00:10:41.399 98.00000% : 18230.924us 
00:10:41.399 99.00000% : 25261.149us 00:10:41.399 99.50000% : 32648.844us 00:10:41.399 99.90000% : 34317.033us 00:10:41.399 99.99000% : 34555.345us 00:10:41.399 99.99900% : 34555.345us 00:10:41.399 99.99990% : 34555.345us 00:10:41.399 99.99999% : 34555.345us 00:10:41.399 00:10:41.399 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:41.399 ================================================================================= 00:10:41.399 1.00000% : 8757.993us 00:10:41.399 10.00000% : 9353.775us 00:10:41.399 25.00000% : 9830.400us 00:10:41.399 50.00000% : 10545.338us 00:10:41.399 75.00000% : 12451.840us 00:10:41.399 90.00000% : 15252.015us 00:10:41.399 95.00000% : 16801.047us 00:10:41.399 98.00000% : 18230.924us 00:10:41.399 99.00000% : 23354.647us 00:10:41.399 99.50000% : 28240.058us 00:10:41.399 99.90000% : 31457.280us 00:10:41.399 99.99000% : 31933.905us 00:10:41.399 99.99900% : 31933.905us 00:10:41.399 99.99990% : 31933.905us 00:10:41.399 99.99999% : 31933.905us 00:10:41.399 00:10:41.399 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:41.399 ================================================================================= 00:10:41.399 1.00000% : 8698.415us 00:10:41.399 10.00000% : 9353.775us 00:10:41.399 25.00000% : 9830.400us 00:10:41.399 50.00000% : 10545.338us 00:10:41.399 75.00000% : 12570.996us 00:10:41.399 90.00000% : 15371.171us 00:10:41.399 95.00000% : 16681.891us 00:10:41.399 98.00000% : 17873.455us 00:10:41.399 99.00000% : 20852.364us 00:10:41.399 99.50000% : 27167.651us 00:10:41.399 99.90000% : 28716.684us 00:10:41.399 99.99000% : 29074.153us 00:10:41.399 99.99900% : 29074.153us 00:10:41.399 99.99990% : 29074.153us 00:10:41.399 99.99999% : 29074.153us 00:10:41.399 00:10:41.399 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:41.399 ============================================================================== 00:10:41.399 Range in us Cumulative IO count 00:10:41.399 8281.367 - 8340.945: 0.0181% ( 2) 00:10:41.399 8340.945 - 8400.524: 0.2348% ( 24) 00:10:41.399 8400.524 - 8460.102: 0.4335% ( 22) 00:10:41.399 8460.102 - 8519.680: 0.9032% ( 52) 00:10:41.399 8519.680 - 8579.258: 1.4541% ( 61) 00:10:41.400 8579.258 - 8638.836: 1.9238% ( 52) 00:10:41.400 8638.836 - 8698.415: 2.6102% ( 76) 00:10:41.400 8698.415 - 8757.993: 3.1521% ( 60) 00:10:41.400 8757.993 - 8817.571: 3.8475% ( 77) 00:10:41.400 8817.571 - 8877.149: 4.4256% ( 64) 00:10:41.400 8877.149 - 8936.727: 5.1120% ( 76) 00:10:41.400 8936.727 - 8996.305: 5.8074% ( 77) 00:10:41.400 8996.305 - 9055.884: 6.5661% ( 84) 00:10:41.400 9055.884 - 9115.462: 7.3609% ( 88) 00:10:41.400 9115.462 - 9175.040: 8.3002% ( 104) 00:10:41.400 9175.040 - 9234.618: 9.4111% ( 123) 00:10:41.400 9234.618 - 9294.196: 10.7027% ( 143) 00:10:41.400 9294.196 - 9353.775: 11.7955% ( 121) 00:10:41.400 9353.775 - 9413.353: 13.2496% ( 161) 00:10:41.400 9413.353 - 9472.931: 14.9296% ( 186) 00:10:41.400 9472.931 - 9532.509: 16.7901% ( 206) 00:10:41.400 9532.509 - 9592.087: 18.6236% ( 203) 00:10:41.400 9592.087 - 9651.665: 20.5473% ( 213) 00:10:41.400 9651.665 - 9711.244: 22.4621% ( 212) 00:10:41.400 9711.244 - 9770.822: 24.3226% ( 206) 00:10:41.400 9770.822 - 9830.400: 26.4270% ( 233) 00:10:41.400 9830.400 - 9889.978: 28.2876% ( 206) 00:10:41.400 9889.978 - 9949.556: 30.4100% ( 235) 00:10:41.400 9949.556 - 10009.135: 32.4241% ( 223) 00:10:41.400 10009.135 - 10068.713: 34.4111% ( 220) 00:10:41.400 10068.713 - 10128.291: 36.2717% ( 206) 00:10:41.400 10128.291 - 10187.869: 38.1864% ( 212) 00:10:41.400 
10187.869 - 10247.447: 39.9476% ( 195) 00:10:41.400 10247.447 - 10307.025: 41.8714% ( 213) 00:10:41.400 10307.025 - 10366.604: 43.7861% ( 212) 00:10:41.400 10366.604 - 10426.182: 45.6647% ( 208) 00:10:41.400 10426.182 - 10485.760: 47.3085% ( 182) 00:10:41.400 10485.760 - 10545.338: 49.0426% ( 192) 00:10:41.400 10545.338 - 10604.916: 50.6232% ( 175) 00:10:41.400 10604.916 - 10664.495: 52.2309% ( 178) 00:10:41.400 10664.495 - 10724.073: 53.7934% ( 173) 00:10:41.400 10724.073 - 10783.651: 55.2655% ( 163) 00:10:41.400 10783.651 - 10843.229: 56.5390% ( 141) 00:10:41.400 10843.229 - 10902.807: 57.5777% ( 115) 00:10:41.400 10902.807 - 10962.385: 58.6163% ( 115) 00:10:41.400 10962.385 - 11021.964: 59.5918% ( 108) 00:10:41.400 11021.964 - 11081.542: 60.6395% ( 116) 00:10:41.400 11081.542 - 11141.120: 61.5426% ( 100) 00:10:41.400 11141.120 - 11200.698: 62.3916% ( 94) 00:10:41.400 11200.698 - 11260.276: 63.3038% ( 101) 00:10:41.400 11260.276 - 11319.855: 63.9812% ( 75) 00:10:41.400 11319.855 - 11379.433: 64.6134% ( 70) 00:10:41.400 11379.433 - 11439.011: 65.2728% ( 73) 00:10:41.400 11439.011 - 11498.589: 65.9140% ( 71) 00:10:41.400 11498.589 - 11558.167: 66.4830% ( 63) 00:10:41.400 11558.167 - 11617.745: 67.0701% ( 65) 00:10:41.400 11617.745 - 11677.324: 67.5849% ( 57) 00:10:41.400 11677.324 - 11736.902: 68.0726% ( 54) 00:10:41.400 11736.902 - 11796.480: 68.5965% ( 58) 00:10:41.400 11796.480 - 11856.058: 69.1745% ( 64) 00:10:41.400 11856.058 - 11915.636: 69.7706% ( 66) 00:10:41.400 11915.636 - 11975.215: 70.5202% ( 83) 00:10:41.400 11975.215 - 12034.793: 71.2970% ( 86) 00:10:41.400 12034.793 - 12094.371: 71.9021% ( 67) 00:10:41.400 12094.371 - 12153.949: 72.6788% ( 86) 00:10:41.400 12153.949 - 12213.527: 73.1936% ( 57) 00:10:41.400 12213.527 - 12273.105: 73.8530% ( 73) 00:10:41.400 12273.105 - 12332.684: 74.4039% ( 61) 00:10:41.400 12332.684 - 12392.262: 74.9729% ( 63) 00:10:41.400 12392.262 - 12451.840: 75.4516% ( 53) 00:10:41.400 12451.840 - 12511.418: 75.9393% ( 54) 00:10:41.400 12511.418 - 12570.996: 76.3006% ( 40) 00:10:41.400 12570.996 - 12630.575: 76.5896% ( 32) 00:10:41.400 12630.575 - 12690.153: 76.9418% ( 39) 00:10:41.400 12690.153 - 12749.731: 77.2399% ( 33) 00:10:41.400 12749.731 - 12809.309: 77.6102% ( 41) 00:10:41.400 12809.309 - 12868.887: 78.0257% ( 46) 00:10:41.400 12868.887 - 12928.465: 78.4050% ( 42) 00:10:41.400 12928.465 - 12988.044: 78.7753% ( 41) 00:10:41.400 12988.044 - 13047.622: 79.1366% ( 40) 00:10:41.400 13047.622 - 13107.200: 79.4978% ( 40) 00:10:41.400 13107.200 - 13166.778: 79.8772% ( 42) 00:10:41.400 13166.778 - 13226.356: 80.2384% ( 40) 00:10:41.400 13226.356 - 13285.935: 80.6810% ( 49) 00:10:41.400 13285.935 - 13345.513: 80.9971% ( 35) 00:10:41.400 13345.513 - 13405.091: 81.4487% ( 50) 00:10:41.400 13405.091 - 13464.669: 81.7919% ( 38) 00:10:41.400 13464.669 - 13524.247: 82.1803% ( 43) 00:10:41.400 13524.247 - 13583.825: 82.5867% ( 45) 00:10:41.400 13583.825 - 13643.404: 82.9028% ( 35) 00:10:41.400 13643.404 - 13702.982: 83.2280% ( 36) 00:10:41.400 13702.982 - 13762.560: 83.5802% ( 39) 00:10:41.400 13762.560 - 13822.138: 84.0137% ( 48) 00:10:41.400 13822.138 - 13881.716: 84.2937% ( 31) 00:10:41.400 13881.716 - 13941.295: 84.6189% ( 36) 00:10:41.400 13941.295 - 14000.873: 84.8808% ( 29) 00:10:41.400 14000.873 - 14060.451: 85.2150% ( 37) 00:10:41.400 14060.451 - 14120.029: 85.5401% ( 36) 00:10:41.400 14120.029 - 14179.607: 85.8201% ( 31) 00:10:41.400 14179.607 - 14239.185: 86.1994% ( 42) 00:10:41.400 14239.185 - 14298.764: 86.5065% ( 34) 00:10:41.400 14298.764 - 
14358.342: 86.7684% ( 29) 00:10:41.400 14358.342 - 14417.920: 87.0755% ( 34) 00:10:41.400 14417.920 - 14477.498: 87.2832% ( 23) 00:10:41.400 14477.498 - 14537.076: 87.4819% ( 22) 00:10:41.400 14537.076 - 14596.655: 87.6716% ( 21) 00:10:41.400 14596.655 - 14656.233: 87.9155% ( 27) 00:10:41.400 14656.233 - 14715.811: 88.1142% ( 22) 00:10:41.400 14715.811 - 14775.389: 88.2948% ( 20) 00:10:41.400 14775.389 - 14834.967: 88.4032% ( 12) 00:10:41.400 14834.967 - 14894.545: 88.6741% ( 30) 00:10:41.400 14894.545 - 14954.124: 88.8728% ( 22) 00:10:41.400 14954.124 - 15013.702: 89.1077% ( 26) 00:10:41.400 15013.702 - 15073.280: 89.3335% ( 25) 00:10:41.400 15073.280 - 15132.858: 89.4960% ( 18) 00:10:41.400 15132.858 - 15192.436: 89.7399% ( 27) 00:10:41.400 15192.436 - 15252.015: 89.9025% ( 18) 00:10:41.400 15252.015 - 15371.171: 90.2637% ( 40) 00:10:41.400 15371.171 - 15490.327: 90.6160% ( 39) 00:10:41.400 15490.327 - 15609.484: 90.9953% ( 42) 00:10:41.400 15609.484 - 15728.640: 91.4740% ( 53) 00:10:41.400 15728.640 - 15847.796: 91.8895% ( 46) 00:10:41.400 15847.796 - 15966.953: 92.3591% ( 52) 00:10:41.400 15966.953 - 16086.109: 92.9191% ( 62) 00:10:41.400 16086.109 - 16205.265: 93.3074% ( 43) 00:10:41.400 16205.265 - 16324.422: 93.6777% ( 41) 00:10:41.400 16324.422 - 16443.578: 94.0571% ( 42) 00:10:41.400 16443.578 - 16562.735: 94.5900% ( 59) 00:10:41.400 16562.735 - 16681.891: 94.8519% ( 29) 00:10:41.400 16681.891 - 16801.047: 95.1319% ( 31) 00:10:41.400 16801.047 - 16920.204: 95.4570% ( 36) 00:10:41.400 16920.204 - 17039.360: 95.7280% ( 30) 00:10:41.400 17039.360 - 17158.516: 96.0350% ( 34) 00:10:41.400 17158.516 - 17277.673: 96.3421% ( 34) 00:10:41.400 17277.673 - 17396.829: 96.6402% ( 33) 00:10:41.400 17396.829 - 17515.985: 96.8840% ( 27) 00:10:41.400 17515.985 - 17635.142: 97.1189% ( 26) 00:10:41.400 17635.142 - 17754.298: 97.4079% ( 32) 00:10:41.400 17754.298 - 17873.455: 97.5975% ( 21) 00:10:41.400 17873.455 - 17992.611: 97.8053% ( 23) 00:10:41.400 17992.611 - 18111.767: 98.0130% ( 23) 00:10:41.400 18111.767 - 18230.924: 98.1846% ( 19) 00:10:41.400 18230.924 - 18350.080: 98.3833% ( 22) 00:10:41.400 18350.080 - 18469.236: 98.5639% ( 20) 00:10:41.400 18469.236 - 18588.393: 98.7085% ( 16) 00:10:41.400 18588.393 - 18707.549: 98.7988% ( 10) 00:10:41.400 18707.549 - 18826.705: 98.8349% ( 4) 00:10:41.400 18826.705 - 18945.862: 98.8439% ( 1) 00:10:41.400 30146.560 - 30265.716: 98.8530% ( 1) 00:10:41.400 30265.716 - 30384.873: 98.8891% ( 4) 00:10:41.400 30384.873 - 30504.029: 98.8981% ( 1) 00:10:41.400 30504.029 - 30742.342: 98.9523% ( 6) 00:10:41.400 30742.342 - 30980.655: 98.9975% ( 5) 00:10:41.400 30980.655 - 31218.967: 99.0336% ( 4) 00:10:41.400 31218.967 - 31457.280: 99.0878% ( 6) 00:10:41.400 31457.280 - 31695.593: 99.1239% ( 4) 00:10:41.400 31695.593 - 31933.905: 99.1691% ( 5) 00:10:41.400 31933.905 - 32172.218: 99.2233% ( 6) 00:10:41.400 32172.218 - 32410.531: 99.2504% ( 3) 00:10:41.400 32410.531 - 32648.844: 99.3046% ( 6) 00:10:41.400 32648.844 - 32887.156: 99.3497% ( 5) 00:10:41.400 32887.156 - 33125.469: 99.3949% ( 5) 00:10:41.400 33125.469 - 33363.782: 99.4220% ( 3) 00:10:41.400 39321.600 - 39559.913: 99.4581% ( 4) 00:10:41.400 39559.913 - 39798.225: 99.5213% ( 7) 00:10:41.400 39798.225 - 40036.538: 99.5665% ( 5) 00:10:41.400 40036.538 - 40274.851: 99.5936% ( 3) 00:10:41.400 40274.851 - 40513.164: 99.6658% ( 8) 00:10:41.400 40513.164 - 40751.476: 99.7110% ( 5) 00:10:41.400 40751.476 - 40989.789: 99.7742% ( 7) 00:10:41.400 40989.789 - 41228.102: 99.8194% ( 5) 00:10:41.400 41228.102 - 
41466.415: 99.8826% ( 7) 00:10:41.400 41466.415 - 41704.727: 99.9458% ( 7) 00:10:41.400 41704.727 - 41943.040: 100.0000% ( 6) 00:10:41.400 00:10:41.400 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:41.400 ============================================================================== 00:10:41.400 Range in us Cumulative IO count 00:10:41.400 8400.524 - 8460.102: 0.0361% ( 4) 00:10:41.400 8460.102 - 8519.680: 0.1535% ( 13) 00:10:41.400 8519.680 - 8579.258: 0.3161% ( 18) 00:10:41.400 8579.258 - 8638.836: 0.5871% ( 30) 00:10:41.400 8638.836 - 8698.415: 1.1470% ( 62) 00:10:41.400 8698.415 - 8757.993: 1.8515% ( 78) 00:10:41.400 8757.993 - 8817.571: 2.6644% ( 90) 00:10:41.400 8817.571 - 8877.149: 3.5314% ( 96) 00:10:41.400 8877.149 - 8936.727: 4.3353% ( 89) 00:10:41.400 8936.727 - 8996.305: 5.2294% ( 99) 00:10:41.400 8996.305 - 9055.884: 6.0784% ( 94) 00:10:41.400 9055.884 - 9115.462: 6.9364% ( 95) 00:10:41.400 9115.462 - 9175.040: 7.7944% ( 95) 00:10:41.400 9175.040 - 9234.618: 8.7789% ( 109) 00:10:41.400 9234.618 - 9294.196: 9.9801% ( 133) 00:10:41.401 9294.196 - 9353.775: 11.1543% ( 130) 00:10:41.401 9353.775 - 9413.353: 12.3555% ( 133) 00:10:41.401 9413.353 - 9472.931: 13.8457% ( 165) 00:10:41.401 9472.931 - 9532.509: 15.5257% ( 186) 00:10:41.401 9532.509 - 9592.087: 17.2598% ( 192) 00:10:41.401 9592.087 - 9651.665: 19.1835% ( 213) 00:10:41.401 9651.665 - 9711.244: 21.4776% ( 254) 00:10:41.401 9711.244 - 9770.822: 23.7175% ( 248) 00:10:41.401 9770.822 - 9830.400: 25.8580% ( 237) 00:10:41.401 9830.400 - 9889.978: 28.0076% ( 238) 00:10:41.401 9889.978 - 9949.556: 30.1210% ( 234) 00:10:41.401 9949.556 - 10009.135: 32.1532% ( 225) 00:10:41.401 10009.135 - 10068.713: 34.3208% ( 240) 00:10:41.401 10068.713 - 10128.291: 36.4613% ( 237) 00:10:41.401 10128.291 - 10187.869: 38.6290% ( 240) 00:10:41.401 10187.869 - 10247.447: 40.8418% ( 245) 00:10:41.401 10247.447 - 10307.025: 43.2081% ( 262) 00:10:41.401 10307.025 - 10366.604: 45.4841% ( 252) 00:10:41.401 10366.604 - 10426.182: 47.4169% ( 214) 00:10:41.401 10426.182 - 10485.760: 49.2052% ( 198) 00:10:41.401 10485.760 - 10545.338: 50.9303% ( 191) 00:10:41.401 10545.338 - 10604.916: 52.5921% ( 184) 00:10:41.401 10604.916 - 10664.495: 54.0101% ( 157) 00:10:41.401 10664.495 - 10724.073: 55.2926% ( 142) 00:10:41.401 10724.073 - 10783.651: 56.3223% ( 114) 00:10:41.401 10783.651 - 10843.229: 57.4332% ( 123) 00:10:41.401 10843.229 - 10902.807: 58.5983% ( 129) 00:10:41.401 10902.807 - 10962.385: 59.8627% ( 140) 00:10:41.401 10962.385 - 11021.964: 60.7840% ( 102) 00:10:41.401 11021.964 - 11081.542: 61.6149% ( 92) 00:10:41.401 11081.542 - 11141.120: 62.3645% ( 83) 00:10:41.401 11141.120 - 11200.698: 63.0238% ( 73) 00:10:41.401 11200.698 - 11260.276: 63.5928% ( 63) 00:10:41.401 11260.276 - 11319.855: 64.0715% ( 53) 00:10:41.401 11319.855 - 11379.433: 64.6044% ( 59) 00:10:41.401 11379.433 - 11439.011: 65.2186% ( 68) 00:10:41.401 11439.011 - 11498.589: 65.8237% ( 67) 00:10:41.401 11498.589 - 11558.167: 66.4017% ( 64) 00:10:41.401 11558.167 - 11617.745: 66.9617% ( 62) 00:10:41.401 11617.745 - 11677.324: 67.4855% ( 58) 00:10:41.401 11677.324 - 11736.902: 68.0004% ( 57) 00:10:41.401 11736.902 - 11796.480: 68.4339% ( 48) 00:10:41.401 11796.480 - 11856.058: 68.8855% ( 50) 00:10:41.401 11856.058 - 11915.636: 69.3732% ( 54) 00:10:41.401 11915.636 - 11975.215: 69.9061% ( 59) 00:10:41.401 11975.215 - 12034.793: 70.4480% ( 60) 00:10:41.401 12034.793 - 12094.371: 71.0079% ( 62) 00:10:41.401 12094.371 - 12153.949: 71.4505% ( 49) 00:10:41.401 12153.949 - 
12213.527: 72.1369% ( 76) 00:10:41.401 12213.527 - 12273.105: 72.8866% ( 83) 00:10:41.401 12273.105 - 12332.684: 73.6091% ( 80) 00:10:41.401 12332.684 - 12392.262: 74.3858% ( 86) 00:10:41.401 12392.262 - 12451.840: 75.1174% ( 81) 00:10:41.401 12451.840 - 12511.418: 75.7406% ( 69) 00:10:41.401 12511.418 - 12570.996: 76.3277% ( 65) 00:10:41.401 12570.996 - 12630.575: 76.8967% ( 63) 00:10:41.401 12630.575 - 12690.153: 77.4025% ( 56) 00:10:41.401 12690.153 - 12749.731: 77.9263% ( 58) 00:10:41.401 12749.731 - 12809.309: 78.3508% ( 47) 00:10:41.401 12809.309 - 12868.887: 78.7121% ( 40) 00:10:41.401 12868.887 - 12928.465: 79.0553% ( 38) 00:10:41.401 12928.465 - 12988.044: 79.4346% ( 42) 00:10:41.401 12988.044 - 13047.622: 79.8320% ( 44) 00:10:41.401 13047.622 - 13107.200: 80.1752% ( 38) 00:10:41.401 13107.200 - 13166.778: 80.5365% ( 40) 00:10:41.401 13166.778 - 13226.356: 80.8436% ( 34) 00:10:41.401 13226.356 - 13285.935: 81.1868% ( 38) 00:10:41.401 13285.935 - 13345.513: 81.5210% ( 37) 00:10:41.401 13345.513 - 13405.091: 81.8642% ( 38) 00:10:41.401 13405.091 - 13464.669: 82.2887% ( 47) 00:10:41.401 13464.669 - 13524.247: 82.7583% ( 52) 00:10:41.401 13524.247 - 13583.825: 83.2551% ( 55) 00:10:41.401 13583.825 - 13643.404: 83.6525% ( 44) 00:10:41.401 13643.404 - 13702.982: 84.0950% ( 49) 00:10:41.401 13702.982 - 13762.560: 84.4382% ( 38) 00:10:41.401 13762.560 - 13822.138: 84.7814% ( 38) 00:10:41.401 13822.138 - 13881.716: 84.9982% ( 24) 00:10:41.401 13881.716 - 13941.295: 85.1969% ( 22) 00:10:41.401 13941.295 - 14000.873: 85.3775% ( 20) 00:10:41.401 14000.873 - 14060.451: 85.5491% ( 19) 00:10:41.401 14060.451 - 14120.029: 85.6936% ( 16) 00:10:41.401 14120.029 - 14179.607: 85.7840% ( 10) 00:10:41.401 14179.607 - 14239.185: 85.9465% ( 18) 00:10:41.401 14239.185 - 14298.764: 86.1001% ( 17) 00:10:41.401 14298.764 - 14358.342: 86.2807% ( 20) 00:10:41.401 14358.342 - 14417.920: 86.4704% ( 21) 00:10:41.401 14417.920 - 14477.498: 86.6600% ( 21) 00:10:41.401 14477.498 - 14537.076: 86.8407% ( 20) 00:10:41.401 14537.076 - 14596.655: 87.0303% ( 21) 00:10:41.401 14596.655 - 14656.233: 87.2471% ( 24) 00:10:41.401 14656.233 - 14715.811: 87.5271% ( 31) 00:10:41.401 14715.811 - 14775.389: 87.7258% ( 22) 00:10:41.401 14775.389 - 14834.967: 87.9697% ( 27) 00:10:41.401 14834.967 - 14894.545: 88.1864% ( 24) 00:10:41.401 14894.545 - 14954.124: 88.4303% ( 27) 00:10:41.401 14954.124 - 15013.702: 88.6832% ( 28) 00:10:41.401 15013.702 - 15073.280: 88.8819% ( 22) 00:10:41.401 15073.280 - 15132.858: 89.0986% ( 24) 00:10:41.401 15132.858 - 15192.436: 89.2883% ( 21) 00:10:41.401 15192.436 - 15252.015: 89.5683% ( 31) 00:10:41.401 15252.015 - 15371.171: 90.0289% ( 51) 00:10:41.401 15371.171 - 15490.327: 90.3992% ( 41) 00:10:41.401 15490.327 - 15609.484: 90.8960% ( 55) 00:10:41.401 15609.484 - 15728.640: 91.3024% ( 45) 00:10:41.401 15728.640 - 15847.796: 91.7088% ( 45) 00:10:41.401 15847.796 - 15966.953: 92.2327% ( 58) 00:10:41.401 15966.953 - 16086.109: 92.5939% ( 40) 00:10:41.401 16086.109 - 16205.265: 92.9371% ( 38) 00:10:41.401 16205.265 - 16324.422: 93.4068% ( 52) 00:10:41.401 16324.422 - 16443.578: 93.9306% ( 58) 00:10:41.401 16443.578 - 16562.735: 94.4906% ( 62) 00:10:41.401 16562.735 - 16681.891: 95.0145% ( 58) 00:10:41.401 16681.891 - 16801.047: 95.3577% ( 38) 00:10:41.401 16801.047 - 16920.204: 95.7641% ( 45) 00:10:41.401 16920.204 - 17039.360: 96.1344% ( 41) 00:10:41.401 17039.360 - 17158.516: 96.3873% ( 28) 00:10:41.401 17158.516 - 17277.673: 96.6402% ( 28) 00:10:41.401 17277.673 - 17396.829: 96.8931% ( 28) 
00:10:41.401 17396.829 - 17515.985: 97.1460% ( 28) 00:10:41.401 17515.985 - 17635.142: 97.4259% ( 31) 00:10:41.401 17635.142 - 17754.298: 97.6156% ( 21) 00:10:41.401 17754.298 - 17873.455: 97.8414% ( 25) 00:10:41.401 17873.455 - 17992.611: 98.0491% ( 23) 00:10:41.401 17992.611 - 18111.767: 98.2478% ( 22) 00:10:41.401 18111.767 - 18230.924: 98.4375% ( 21) 00:10:41.401 18230.924 - 18350.080: 98.5910% ( 17) 00:10:41.401 18350.080 - 18469.236: 98.6814% ( 10) 00:10:41.401 18469.236 - 18588.393: 98.7355% ( 6) 00:10:41.401 18588.393 - 18707.549: 98.7717% ( 4) 00:10:41.401 18707.549 - 18826.705: 98.8078% ( 4) 00:10:41.401 18826.705 - 18945.862: 98.8439% ( 4) 00:10:41.401 28716.684 - 28835.840: 98.8710% ( 3) 00:10:41.401 28835.840 - 28954.996: 98.8891% ( 2) 00:10:41.401 28954.996 - 29074.153: 98.9162% ( 3) 00:10:41.401 29074.153 - 29193.309: 98.9433% ( 3) 00:10:41.401 29193.309 - 29312.465: 98.9704% ( 3) 00:10:41.401 29312.465 - 29431.622: 98.9975% ( 3) 00:10:41.401 29431.622 - 29550.778: 99.0246% ( 3) 00:10:41.401 29550.778 - 29669.935: 99.0517% ( 3) 00:10:41.401 29669.935 - 29789.091: 99.0788% ( 3) 00:10:41.401 29789.091 - 29908.247: 99.1059% ( 3) 00:10:41.401 29908.247 - 30027.404: 99.1329% ( 3) 00:10:41.401 30027.404 - 30146.560: 99.1510% ( 2) 00:10:41.401 30146.560 - 30265.716: 99.1781% ( 3) 00:10:41.401 30265.716 - 30384.873: 99.2052% ( 3) 00:10:41.401 30384.873 - 30504.029: 99.2233% ( 2) 00:10:41.401 30504.029 - 30742.342: 99.2775% ( 6) 00:10:41.401 30742.342 - 30980.655: 99.3226% ( 5) 00:10:41.401 30980.655 - 31218.967: 99.3768% ( 6) 00:10:41.401 31218.967 - 31457.280: 99.4220% ( 5) 00:10:41.401 36938.473 - 37176.785: 99.4762% ( 6) 00:10:41.401 37176.785 - 37415.098: 99.5394% ( 7) 00:10:41.401 37415.098 - 37653.411: 99.5936% ( 6) 00:10:41.401 37653.411 - 37891.724: 99.6568% ( 7) 00:10:41.401 37891.724 - 38130.036: 99.7200% ( 7) 00:10:41.401 38130.036 - 38368.349: 99.7742% ( 6) 00:10:41.401 38368.349 - 38606.662: 99.8374% ( 7) 00:10:41.401 38606.662 - 38844.975: 99.8916% ( 6) 00:10:41.401 38844.975 - 39083.287: 99.9458% ( 6) 00:10:41.401 39083.287 - 39321.600: 100.0000% ( 6) 00:10:41.401 00:10:41.401 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:41.401 ============================================================================== 00:10:41.401 Range in us Cumulative IO count 00:10:41.401 8400.524 - 8460.102: 0.0090% ( 1) 00:10:41.401 8460.102 - 8519.680: 0.0813% ( 8) 00:10:41.401 8519.680 - 8579.258: 0.2890% ( 23) 00:10:41.401 8579.258 - 8638.836: 0.6051% ( 35) 00:10:41.401 8638.836 - 8698.415: 1.0567% ( 50) 00:10:41.401 8698.415 - 8757.993: 1.8244% ( 85) 00:10:41.401 8757.993 - 8817.571: 2.6553% ( 92) 00:10:41.401 8817.571 - 8877.149: 3.4863% ( 92) 00:10:41.401 8877.149 - 8936.727: 4.3262% ( 93) 00:10:41.401 8936.727 - 8996.305: 5.1842% ( 95) 00:10:41.401 8996.305 - 9055.884: 6.0603% ( 97) 00:10:41.401 9055.884 - 9115.462: 6.9635% ( 100) 00:10:41.401 9115.462 - 9175.040: 7.8396% ( 97) 00:10:41.401 9175.040 - 9234.618: 8.9234% ( 120) 00:10:41.401 9234.618 - 9294.196: 10.0253% ( 122) 00:10:41.401 9294.196 - 9353.775: 11.3259% ( 144) 00:10:41.401 9353.775 - 9413.353: 12.6355% ( 145) 00:10:41.401 9413.353 - 9472.931: 14.1980% ( 173) 00:10:41.401 9472.931 - 9532.509: 15.7695% ( 174) 00:10:41.401 9532.509 - 9592.087: 17.6210% ( 205) 00:10:41.401 9592.087 - 9651.665: 19.4816% ( 206) 00:10:41.401 9651.665 - 9711.244: 21.3331% ( 205) 00:10:41.401 9711.244 - 9770.822: 23.2027% ( 207) 00:10:41.401 9770.822 - 9830.400: 25.1716% ( 218) 00:10:41.402 9830.400 - 9889.978: 27.2579% ( 
231) 00:10:41.402 9889.978 - 9949.556: 29.4527% ( 243) 00:10:41.402 9949.556 - 10009.135: 31.7467% ( 254) 00:10:41.402 10009.135 - 10068.713: 33.8331% ( 231) 00:10:41.402 10068.713 - 10128.291: 35.8833% ( 227) 00:10:41.402 10128.291 - 10187.869: 38.0238% ( 237) 00:10:41.402 10187.869 - 10247.447: 40.1102% ( 231) 00:10:41.402 10247.447 - 10307.025: 42.0069% ( 210) 00:10:41.402 10307.025 - 10366.604: 43.8313% ( 202) 00:10:41.402 10366.604 - 10426.182: 45.7280% ( 210) 00:10:41.402 10426.182 - 10485.760: 47.6517% ( 213) 00:10:41.402 10485.760 - 10545.338: 49.4491% ( 199) 00:10:41.402 10545.338 - 10604.916: 51.0567% ( 178) 00:10:41.402 10604.916 - 10664.495: 52.5741% ( 168) 00:10:41.402 10664.495 - 10724.073: 54.0462% ( 163) 00:10:41.402 10724.073 - 10783.651: 55.5184% ( 163) 00:10:41.402 10783.651 - 10843.229: 56.7467% ( 136) 00:10:41.402 10843.229 - 10902.807: 58.0564% ( 145) 00:10:41.402 10902.807 - 10962.385: 59.3840% ( 147) 00:10:41.402 10962.385 - 11021.964: 60.4678% ( 120) 00:10:41.402 11021.964 - 11081.542: 61.5788% ( 123) 00:10:41.402 11081.542 - 11141.120: 62.4729% ( 99) 00:10:41.402 11141.120 - 11200.698: 63.2767% ( 89) 00:10:41.402 11200.698 - 11260.276: 64.1528% ( 97) 00:10:41.402 11260.276 - 11319.855: 64.8754% ( 80) 00:10:41.402 11319.855 - 11379.433: 65.5076% ( 70) 00:10:41.402 11379.433 - 11439.011: 66.1127% ( 67) 00:10:41.402 11439.011 - 11498.589: 66.6727% ( 62) 00:10:41.402 11498.589 - 11558.167: 67.2056% ( 59) 00:10:41.402 11558.167 - 11617.745: 67.7294% ( 58) 00:10:41.402 11617.745 - 11677.324: 68.2171% ( 54) 00:10:41.402 11677.324 - 11736.902: 68.6777% ( 51) 00:10:41.402 11736.902 - 11796.480: 69.1745% ( 55) 00:10:41.402 11796.480 - 11856.058: 69.6351% ( 51) 00:10:41.402 11856.058 - 11915.636: 70.0777% ( 49) 00:10:41.402 11915.636 - 11975.215: 70.5925% ( 57) 00:10:41.402 11975.215 - 12034.793: 71.0170% ( 47) 00:10:41.402 12034.793 - 12094.371: 71.5408% ( 58) 00:10:41.402 12094.371 - 12153.949: 72.0827% ( 60) 00:10:41.402 12153.949 - 12213.527: 72.5704% ( 54) 00:10:41.402 12213.527 - 12273.105: 73.1846% ( 68) 00:10:41.402 12273.105 - 12332.684: 73.8439% ( 73) 00:10:41.402 12332.684 - 12392.262: 74.5123% ( 74) 00:10:41.402 12392.262 - 12451.840: 75.2710% ( 84) 00:10:41.402 12451.840 - 12511.418: 75.8671% ( 66) 00:10:41.402 12511.418 - 12570.996: 76.4451% ( 64) 00:10:41.402 12570.996 - 12630.575: 76.9057% ( 51) 00:10:41.402 12630.575 - 12690.153: 77.3121% ( 45) 00:10:41.402 12690.153 - 12749.731: 77.7005% ( 43) 00:10:41.402 12749.731 - 12809.309: 78.0798% ( 42) 00:10:41.402 12809.309 - 12868.887: 78.4772% ( 44) 00:10:41.402 12868.887 - 12928.465: 78.8746% ( 44) 00:10:41.402 12928.465 - 12988.044: 79.2359% ( 40) 00:10:41.402 12988.044 - 13047.622: 79.6965% ( 51) 00:10:41.402 13047.622 - 13107.200: 80.2746% ( 64) 00:10:41.402 13107.200 - 13166.778: 80.7894% ( 57) 00:10:41.402 13166.778 - 13226.356: 81.2229% ( 48) 00:10:41.402 13226.356 - 13285.935: 81.6655% ( 49) 00:10:41.402 13285.935 - 13345.513: 82.1351% ( 52) 00:10:41.402 13345.513 - 13405.091: 82.5867% ( 50) 00:10:41.402 13405.091 - 13464.669: 83.0202% ( 48) 00:10:41.402 13464.669 - 13524.247: 83.3996% ( 42) 00:10:41.402 13524.247 - 13583.825: 83.7518% ( 39) 00:10:41.402 13583.825 - 13643.404: 84.1221% ( 41) 00:10:41.402 13643.404 - 13702.982: 84.4473% ( 36) 00:10:41.402 13702.982 - 13762.560: 84.7543% ( 34) 00:10:41.402 13762.560 - 13822.138: 85.0253% ( 30) 00:10:41.402 13822.138 - 13881.716: 85.2330% ( 23) 00:10:41.402 13881.716 - 13941.295: 85.4046% ( 19) 00:10:41.402 13941.295 - 14000.873: 85.5491% ( 16) 00:10:41.402 
[ remaining per-bucket latency rows for the preceding controller omitted; that table ends at the 37176.785 - 37415.098 us bucket with 100.0000% cumulative IO ]
00:10:41.402
00:10:41.402 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:41.402 ==============================================================================
00:10:41.402        Range in us     Cumulative    IO count
[ per-bucket rows omitted; buckets run from 8460.102 us up to the 34317.033 - 34555.345 us bucket, where cumulative IO reaches 100.0000% ]
00:10:41.404
00:10:41.404 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:41.404 ==============================================================================
00:10:41.404        Range in us     Cumulative    IO count
[ per-bucket rows omitted; buckets run from 8400.524 us up to the 31695.593 - 31933.905 us bucket, where cumulative IO reaches 100.0000% ]
00:10:41.405
00:10:41.405 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:41.405 ==============================================================================
00:10:41.405        Range in us     Cumulative    IO count
[ per-bucket rows omitted; buckets run from 8460.102 us up to the 28954.996 - 29074.153 us bucket, where cumulative IO reaches 100.0000% ]
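The bucket tables summarized above are nvme_perf's own console output: in the raw log each row is one line of the form '<low us> - <high us>: <cumulative %> ( <IO count> )'. As a purely illustrative post-processing sketch (the build.log name and the presence of the raw one-row-per-line output are assumptions, not something this run produces), an awk pass can report the first bucket in each table whose cumulative IO reaches 99%:

    # hypothetical helper, run against a saved copy of this console log
    grep -E '^[0-9:.]+ +[0-9.]+ - +[0-9.]+: +[0-9.]+% +\(' build.log |
    awk '{
        hi  = $4; sub(/:$/, "", hi)      # upper edge of the bucket, in us
        pct = $5; sub(/%$/, "", pct)     # cumulative percentage of IOs
        if (!hit && pct + 0 >= 99) { printf("~p99 <= %s us\n", hi); hit = 1 }
        if (pct + 0 >= 100) hit = 0      # a table ends at 100%, reset for the next one
    }'

One line is printed per histogram, so each controller/namespace table in the log gets its own approximate p99 bucket.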
00:10:41.407 09:59:30 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:41.407
00:10:41.407 real 0m2.686s
00:10:41.407 user 0m2.291s
00:10:41.407 sys 0m0.279s
00:10:41.407 09:59:30 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:10:41.407 09:59:30 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:10:41.407 ************************************
00:10:41.407 END TEST nvme_perf
00:10:41.407 ************************************
00:10:41.407 09:59:30 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:41.407 09:59:30 nvme -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:10:41.407 09:59:30 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable
00:10:41.407 09:59:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:41.407 ************************************
00:10:41.407 START TEST nvme_hello_world
00:10:41.407 ************************************
00:10:41.407 09:59:30 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:41.666 Initializing NVMe Controllers
00:10:41.666 Attached to 0000:00:10.0
00:10:41.666 Namespace ID: 1 size: 6GB
00:10:41.666 Attached to 0000:00:11.0
00:10:41.666 Namespace ID: 1 size: 5GB
00:10:41.666 Attached to 0000:00:13.0
00:10:41.666 Namespace ID: 1 size: 1GB
00:10:41.666 Attached to 0000:00:12.0
00:10:41.666 Namespace ID: 1 size: 4GB
00:10:41.666 Namespace ID: 2 size: 4GB
00:10:41.666 Namespace ID: 3 size: 4GB
00:10:41.666 Initialization complete.
00:10:41.666 INFO: using host memory buffer for IO
00:10:41.666 Hello world!
00:10:41.666 INFO: using host memory buffer for IO
00:10:41.666 Hello world!
00:10:41.666 INFO: using host memory buffer for IO
00:10:41.666 Hello world!
00:10:41.666 INFO: using host memory buffer for IO
00:10:41.666 Hello world!
00:10:41.666 INFO: using host memory buffer for IO
00:10:41.666 Hello world!
00:10:41.666 INFO: using host memory buffer for IO
00:10:41.666 Hello world!
00:10:41.666 ************************************ 00:10:41.666 END TEST nvme_hello_world 00:10:41.666 ************************************ 00:10:41.666 00:10:41.666 real 0m0.306s 00:10:41.666 user 0m0.125s 00:10:41.666 sys 0m0.135s 00:10:41.666 09:59:30 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:41.666 09:59:30 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:41.666 09:59:31 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:41.666 09:59:31 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:41.666 09:59:31 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:41.666 09:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.666 ************************************ 00:10:41.666 START TEST nvme_sgl 00:10:41.666 ************************************ 00:10:41.666 09:59:31 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:41.925 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:41.925 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:41.925 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:41.925 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:41.925 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:41.925 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:41.925 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:41.925 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:41.925 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:41.925 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:41.925 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:41.925 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:41.925 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:41.925 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:41.925 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:41.925 NVMe Readv/Writev Request test 00:10:41.925 Attached to 0000:00:10.0 00:10:41.925 Attached to 0000:00:11.0 00:10:41.925 Attached to 0000:00:13.0 00:10:41.925 Attached to 0000:00:12.0 00:10:41.925 0000:00:10.0: build_io_request_2 test passed 00:10:41.925 0000:00:10.0: build_io_request_4 test passed 00:10:41.925 0000:00:10.0: build_io_request_5 test passed 00:10:41.925 0000:00:10.0: build_io_request_6 test passed 00:10:41.925 0000:00:10.0: build_io_request_7 test passed 00:10:41.925 0000:00:10.0: build_io_request_10 test passed 00:10:41.925 0000:00:11.0: build_io_request_2 test passed 00:10:41.925 0000:00:11.0: build_io_request_4 test passed 00:10:41.925 0000:00:11.0: build_io_request_5 test passed 00:10:41.925 0000:00:11.0: build_io_request_6 test passed 00:10:41.925 0000:00:11.0: build_io_request_7 test passed 00:10:41.925 0000:00:11.0: build_io_request_10 test passed 00:10:41.925 Cleaning up... 00:10:41.925 00:10:41.925 real 0m0.384s 00:10:41.925 user 0m0.207s 00:10:41.925 sys 0m0.126s 00:10:41.926 09:59:31 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:41.926 09:59:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:41.926 ************************************ 00:10:41.926 END TEST nvme_sgl 00:10:41.926 ************************************ 00:10:42.184 09:59:31 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:42.184 09:59:31 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:42.184 09:59:31 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:42.184 09:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:42.184 ************************************ 00:10:42.184 START TEST nvme_e2edp 00:10:42.184 ************************************ 00:10:42.184 09:59:31 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:42.443 NVMe Write/Read with End-to-End data protection test 00:10:42.443 Attached to 0000:00:10.0 00:10:42.443 Attached to 0000:00:11.0 00:10:42.443 Attached to 0000:00:13.0 00:10:42.443 Attached to 0000:00:12.0 00:10:42.443 Cleaning up... 
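The nvme_sgl output a little further up prints one line per generated request per controller, mixing 'Invalid IO length parameter' rejections (the negative cases the test expects, since the run still finishes cleanly) with 'test passed' lines. A throwaway per-controller tally of such a capture could look like the sketch below; build.log is again an assumed local copy of this console output, not a file the autotest run produces:

    # hypothetical tally, not part of the autotest run
    grep -oE '0000:00:1[0-3]\.0: build_io_request_[0-9]+ (test passed|Invalid IO length parameter)' build.log |
    sed -E 's/build_io_request_[0-9]+ //' |
    sort | uniq -c

This yields one count per controller and outcome, which makes it easy to spot a controller whose pass/reject split differs from the others.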
00:10:42.443 ************************************ 00:10:42.443 END TEST nvme_e2edp 00:10:42.443 ************************************ 00:10:42.443 00:10:42.443 real 0m0.334s 00:10:42.443 user 0m0.112s 00:10:42.443 sys 0m0.156s 00:10:42.443 09:59:31 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:42.443 09:59:31 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:42.443 09:59:31 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:42.443 09:59:31 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:42.443 09:59:31 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:42.443 09:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:42.443 ************************************ 00:10:42.443 START TEST nvme_reserve 00:10:42.443 ************************************ 00:10:42.443 09:59:31 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:42.702 ===================================================== 00:10:42.702 NVMe Controller at PCI bus 0, device 16, function 0 00:10:42.702 ===================================================== 00:10:42.702 Reservations: Not Supported 00:10:42.702 ===================================================== 00:10:42.702 NVMe Controller at PCI bus 0, device 17, function 0 00:10:42.702 ===================================================== 00:10:42.702 Reservations: Not Supported 00:10:42.702 ===================================================== 00:10:42.702 NVMe Controller at PCI bus 0, device 19, function 0 00:10:42.702 ===================================================== 00:10:42.702 Reservations: Not Supported 00:10:42.702 ===================================================== 00:10:42.702 NVMe Controller at PCI bus 0, device 18, function 0 00:10:42.702 ===================================================== 00:10:42.702 Reservations: Not Supported 00:10:42.702 Reservation test passed 00:10:42.702 00:10:42.702 real 0m0.284s 00:10:42.702 user 0m0.109s 00:10:42.702 sys 0m0.132s 00:10:42.702 ************************************ 00:10:42.702 END TEST nvme_reserve 00:10:42.702 ************************************ 00:10:42.702 09:59:32 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:42.702 09:59:32 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:42.702 09:59:32 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:42.702 09:59:32 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:42.702 09:59:32 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:42.702 09:59:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:42.702 ************************************ 00:10:42.702 START TEST nvme_err_injection 00:10:42.702 ************************************ 00:10:42.702 09:59:32 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:43.271 NVMe Error Injection test 00:10:43.271 Attached to 0000:00:10.0 00:10:43.271 Attached to 0000:00:11.0 00:10:43.271 Attached to 0000:00:13.0 00:10:43.271 Attached to 0000:00:12.0 00:10:43.271 0000:00:10.0: get features failed as expected 00:10:43.271 0000:00:11.0: get features failed as expected 00:10:43.271 0000:00:13.0: get features failed as expected 00:10:43.271 0000:00:12.0: get features failed as expected 00:10:43.271 
0000:00:10.0: get features successfully as expected 00:10:43.271 0000:00:11.0: get features successfully as expected 00:10:43.271 0000:00:13.0: get features successfully as expected 00:10:43.271 0000:00:12.0: get features successfully as expected 00:10:43.271 0000:00:10.0: read failed as expected 00:10:43.271 0000:00:11.0: read failed as expected 00:10:43.271 0000:00:13.0: read failed as expected 00:10:43.271 0000:00:12.0: read failed as expected 00:10:43.271 0000:00:10.0: read successfully as expected 00:10:43.271 0000:00:11.0: read successfully as expected 00:10:43.271 0000:00:13.0: read successfully as expected 00:10:43.271 0000:00:12.0: read successfully as expected 00:10:43.271 Cleaning up... 00:10:43.271 00:10:43.271 real 0m0.310s 00:10:43.271 user 0m0.122s 00:10:43.271 sys 0m0.133s 00:10:43.271 ************************************ 00:10:43.271 END TEST nvme_err_injection 00:10:43.271 ************************************ 00:10:43.271 09:59:32 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:43.271 09:59:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:43.271 09:59:32 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:43.271 09:59:32 nvme -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:10:43.271 09:59:32 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:43.271 09:59:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.271 ************************************ 00:10:43.271 START TEST nvme_overhead 00:10:43.271 ************************************ 00:10:43.271 09:59:32 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:44.648 Initializing NVMe Controllers 00:10:44.648 Attached to 0000:00:10.0 00:10:44.648 Attached to 0000:00:11.0 00:10:44.648 Attached to 0000:00:13.0 00:10:44.648 Attached to 0000:00:12.0 00:10:44.648 Initialization complete. Launching workers. 
00:10:44.648 submit (in ns) avg, min, max = 15210.2, 12895.5, 106260.9
00:10:44.648 complete (in ns) avg, min, max = 10313.4, 9459.1, 108191.8
00:10:44.648
00:10:44.648 Submit histogram
00:10:44.648 ================
00:10:44.648        Range in us     Cumulative    Count
[ per-bucket submit-latency rows omitted; buckets run from 12.858 us up to the 106.124 - 106.589 us bucket, where the cumulative count reaches 100.0000% ]
00:10:44.648
00:10:44.648 Complete histogram
00:10:44.648 ==================
00:10:44.648        Range in us     Cumulative    Count
[ per-bucket complete-latency rows omitted; buckets run from 9.425 us up to the 107.985 - 108.451 us bucket, where the cumulative count reaches 100.0000% ]
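Read together, the submit and complete averages above (15210.2 ns and 10313.4 ns) amount to roughly 25.5 us of measured software overhead per IO for this -o 4096 -t 1 run. If those two figures ever need to be pulled out of a raw capture, a small sketch like the following would do it (build.log is, as before, an assumed file name, not an autotest artifact):

    # hypothetical extraction, not part of the autotest run
    grep -E '(submit|complete) \(in ns\) avg, min, max' build.log |
    awk -F'=' '{ split($2, v, ","); sum += v[1] } END { printf("combined avg: %.1f ns per IO\n", sum) }'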
00:10:44.649
00:10:44.649 real 0m1.300s
00:10:44.649 user 0m1.104s
00:10:44.649 sys 0m0.135s
00:10:44.649 09:59:33 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # xtrace_disable
00:10:44.649 09:59:33 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:10:44.649 ************************************
00:10:44.649 END TEST nvme_overhead
00:10:44.649 ************************************
00:10:44.649 09:59:33 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:44.649 09:59:33 nvme -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:10:44.649 09:59:33 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable
00:10:44.649 09:59:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:44.649 ************************************ 00:10:44.649 START TEST nvme_arbitration 00:10:44.649 ************************************ 00:10:44.649 09:59:33 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:47.935 Initializing NVMe Controllers 00:10:47.935 Attached to 0000:00:10.0 00:10:47.935 Attached to 0000:00:11.0 00:10:47.935 Attached to 0000:00:13.0 00:10:47.935 Attached to 0000:00:12.0 00:10:47.935 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:47.935 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:47.935 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:47.935 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:47.935 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:47.935 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:47.935 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:47.935 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:47.935 Initialization complete. Launching workers. 00:10:47.935 Starting thread on core 1 with urgent priority queue 00:10:47.935 Starting thread on core 2 with urgent priority queue 00:10:47.935 Starting thread on core 3 with urgent priority queue 00:10:47.935 Starting thread on core 0 with urgent priority queue 00:10:47.935 QEMU NVMe Ctrl (12340 ) core 0: 682.67 IO/s 146.48 secs/100000 ios 00:10:47.935 QEMU NVMe Ctrl (12342 ) core 0: 682.67 IO/s 146.48 secs/100000 ios 00:10:47.935 QEMU NVMe Ctrl (12341 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:10:47.935 QEMU NVMe Ctrl (12342 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:10:47.935 QEMU NVMe Ctrl (12343 ) core 2: 640.00 IO/s 156.25 secs/100000 ios 00:10:47.935 QEMU NVMe Ctrl (12342 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:10:47.935 ======================================================== 00:10:47.935 00:10:47.935 ************************************ 00:10:47.935 END TEST nvme_arbitration 00:10:47.935 ************************************ 00:10:47.935 00:10:47.935 real 0m3.402s 00:10:47.935 user 0m9.361s 00:10:47.935 sys 0m0.150s 00:10:47.935 09:59:37 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:47.935 09:59:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:47.935 09:59:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:47.935 09:59:37 nvme -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:10:47.935 09:59:37 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:47.935 09:59:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:47.935 ************************************ 00:10:47.935 START TEST nvme_single_aen 00:10:47.935 ************************************ 00:10:47.935 09:59:37 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:48.193 Asynchronous Event Request test 00:10:48.193 Attached to 0000:00:10.0 00:10:48.193 Attached to 0000:00:11.0 00:10:48.193 Attached to 0000:00:13.0 00:10:48.193 Attached to 0000:00:12.0 00:10:48.193 Reset controller to setup AER completions for this process 00:10:48.193 Registering asynchronous event callbacks... 
00:10:48.193 Getting orig temperature thresholds of all controllers 00:10:48.193 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:48.193 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:48.193 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:48.193 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:48.193 Setting all controllers temperature threshold low to trigger AER 00:10:48.193 Waiting for all controllers temperature threshold to be set lower 00:10:48.193 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:48.193 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:48.193 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:48.193 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:48.193 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:48.193 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:48.193 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:48.193 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:48.193 Waiting for all controllers to trigger AER and reset threshold 00:10:48.193 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:48.193 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:48.194 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:48.194 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:48.194 Cleaning up... 00:10:48.194 00:10:48.194 real 0m0.263s 00:10:48.194 user 0m0.104s 00:10:48.194 sys 0m0.117s 00:10:48.194 ************************************ 00:10:48.194 END TEST nvme_single_aen 00:10:48.194 ************************************ 00:10:48.194 09:59:37 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:48.194 09:59:37 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:48.194 09:59:37 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:48.194 09:59:37 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:48.194 09:59:37 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:48.194 09:59:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:48.194 ************************************ 00:10:48.194 START TEST nvme_doorbell_aers 00:10:48.194 ************************************ 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # nvme_doorbell_aers 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # bdfs=() 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # local bdfs 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:48.194 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 
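
The two xtrace lines above show how nvme_doorbell_aers discovers controllers: gen_nvme.sh emits an attach-controller config and jq pulls out each PCI address. A condensed sketch of that enumeration plus the per-device loop the next lines enter (paths as used in this job):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1            # bail out if no NVMe devices were found
    for bdf in "${bdfs[@]}"; do
        # each doorbell_aers run is capped at 10 s while keeping the test's exit status
        timeout --preserve-status 10 "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done
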
00:10:48.452 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # (( 4 == 0 )) 00:10:48.452 09:59:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:48.452 09:59:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:48.452 09:59:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:48.711 [2024-06-10 09:59:37.975023] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:10:58.714 Executing: test_write_invalid_db 00:10:58.714 Waiting for AER completion... 00:10:58.714 Failure: test_write_invalid_db 00:10:58.714 00:10:58.714 Executing: test_invalid_db_write_overflow_sq 00:10:58.714 Waiting for AER completion... 00:10:58.714 Failure: test_invalid_db_write_overflow_sq 00:10:58.714 00:10:58.714 Executing: test_invalid_db_write_overflow_cq 00:10:58.714 Waiting for AER completion... 00:10:58.714 Failure: test_invalid_db_write_overflow_cq 00:10:58.714 00:10:58.714 09:59:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:58.714 09:59:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:58.715 [2024-06-10 09:59:48.024191] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:08.681 Executing: test_write_invalid_db 00:11:08.681 Waiting for AER completion... 00:11:08.681 Failure: test_write_invalid_db 00:11:08.681 00:11:08.681 Executing: test_invalid_db_write_overflow_sq 00:11:08.681 Waiting for AER completion... 00:11:08.681 Failure: test_invalid_db_write_overflow_sq 00:11:08.681 00:11:08.681 Executing: test_invalid_db_write_overflow_cq 00:11:08.681 Waiting for AER completion... 00:11:08.681 Failure: test_invalid_db_write_overflow_cq 00:11:08.681 00:11:08.681 09:59:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:08.681 09:59:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:08.681 [2024-06-10 09:59:58.067872] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:18.658 Executing: test_write_invalid_db 00:11:18.658 Waiting for AER completion... 00:11:18.658 Failure: test_write_invalid_db 00:11:18.658 00:11:18.658 Executing: test_invalid_db_write_overflow_sq 00:11:18.658 Waiting for AER completion... 00:11:18.658 Failure: test_invalid_db_write_overflow_sq 00:11:18.658 00:11:18.659 Executing: test_invalid_db_write_overflow_cq 00:11:18.659 Waiting for AER completion... 
00:11:18.659 Failure: test_invalid_db_write_overflow_cq 00:11:18.659 00:11:18.659 10:00:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:18.659 10:00:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:18.659 [2024-06-10 10:00:08.103841] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.662 Executing: test_write_invalid_db 00:11:28.662 Waiting for AER completion... 00:11:28.662 Failure: test_write_invalid_db 00:11:28.662 00:11:28.662 Executing: test_invalid_db_write_overflow_sq 00:11:28.662 Waiting for AER completion... 00:11:28.662 Failure: test_invalid_db_write_overflow_sq 00:11:28.662 00:11:28.662 Executing: test_invalid_db_write_overflow_cq 00:11:28.662 Waiting for AER completion... 00:11:28.662 Failure: test_invalid_db_write_overflow_cq 00:11:28.662 00:11:28.662 ************************************ 00:11:28.662 END TEST nvme_doorbell_aers 00:11:28.662 ************************************ 00:11:28.662 00:11:28.662 real 0m40.247s 00:11:28.662 user 0m33.769s 00:11:28.662 sys 0m6.130s 00:11:28.662 10:00:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:28.662 10:00:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 10:00:17 nvme -- nvme/nvme.sh@97 -- # uname 00:11:28.662 10:00:17 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:28.662 10:00:17 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:28.662 10:00:17 nvme -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:11:28.662 10:00:17 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:28.662 10:00:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 ************************************ 00:11:28.662 START TEST nvme_multi_aen 00:11:28.662 ************************************ 00:11:28.662 10:00:17 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:28.920 [2024-06-10 10:00:18.217105] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.217231] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.217265] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.219499] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.219553] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.219579] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.221084] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. 
Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.221138] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.221164] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.222618] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.222694] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 [2024-06-10 10:00:18.222722] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69820) is not found. Dropping the request. 00:11:28.920 Child process pid: 70337 00:11:29.177 [Child] Asynchronous Event Request test 00:11:29.177 [Child] Attached to 0000:00:10.0 00:11:29.177 [Child] Attached to 0000:00:11.0 00:11:29.177 [Child] Attached to 0000:00:13.0 00:11:29.177 [Child] Attached to 0000:00:12.0 00:11:29.177 [Child] Registering asynchronous event callbacks... 00:11:29.177 [Child] Getting orig temperature thresholds of all controllers 00:11:29.177 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:29.177 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 [Child] Cleaning up... 00:11:29.177 Asynchronous Event Request test 00:11:29.177 Attached to 0000:00:10.0 00:11:29.177 Attached to 0000:00:11.0 00:11:29.177 Attached to 0000:00:13.0 00:11:29.177 Attached to 0000:00:12.0 00:11:29.177 Reset controller to setup AER completions for this process 00:11:29.177 Registering asynchronous event callbacks... 
00:11:29.177 Getting orig temperature thresholds of all controllers 00:11:29.177 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.177 Setting all controllers temperature threshold low to trigger AER 00:11:29.177 Waiting for all controllers temperature threshold to be set lower 00:11:29.177 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:29.177 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:29.177 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:29.177 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.177 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:29.177 Waiting for all controllers to trigger AER and reset threshold 00:11:29.177 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.177 Cleaning up... 00:11:29.177 ************************************ 00:11:29.177 END TEST nvme_multi_aen 00:11:29.177 ************************************ 00:11:29.177 00:11:29.177 real 0m0.570s 00:11:29.177 user 0m0.208s 00:11:29.177 sys 0m0.245s 00:11:29.177 10:00:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:29.177 10:00:18 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:29.177 10:00:18 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:29.177 10:00:18 nvme -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:11:29.178 10:00:18 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:29.178 10:00:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:29.178 ************************************ 00:11:29.178 START TEST nvme_startup 00:11:29.178 ************************************ 00:11:29.178 10:00:18 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:29.436 Initializing NVMe Controllers 00:11:29.436 Attached to 0000:00:10.0 00:11:29.436 Attached to 0000:00:11.0 00:11:29.436 Attached to 0000:00:13.0 00:11:29.436 Attached to 0000:00:12.0 00:11:29.436 Initialization complete. 00:11:29.436 Time used:200492.875 (us). 
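
The nvme_startup run above reports its attach-and-initialization time in microseconds; 200492.875 us is about 0.20 s of the 0m0.292s wall clock that follows. A direct invocation under the same assumption that -t carries a microsecond timeout (the logged value suggests this, but it is an assumption here):

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000   # -t assumed to be a timeout in us
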
00:11:29.436 ************************************ 00:11:29.436 END TEST nvme_startup 00:11:29.436 ************************************ 00:11:29.436 00:11:29.436 real 0m0.292s 00:11:29.436 user 0m0.122s 00:11:29.436 sys 0m0.126s 00:11:29.436 10:00:18 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:29.436 10:00:18 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:29.436 10:00:18 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:29.436 10:00:18 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:29.436 10:00:18 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:29.436 10:00:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:29.436 ************************************ 00:11:29.436 START TEST nvme_multi_secondary 00:11:29.436 ************************************ 00:11:29.436 10:00:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # nvme_multi_secondary 00:11:29.436 10:00:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70389 00:11:29.436 10:00:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:29.436 10:00:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70390 00:11:29.436 10:00:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:29.436 10:00:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:33.626 Initializing NVMe Controllers 00:11:33.626 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:33.626 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:33.626 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:33.626 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:33.626 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:33.626 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:33.626 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:33.626 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:33.626 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:33.626 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:33.626 Initialization complete. Launching workers. 
00:11:33.626 ======================================================== 00:11:33.626 Latency(us) 00:11:33.626 Device Information : IOPS MiB/s Average min max 00:11:33.626 PCIE (0000:00:10.0) NSID 1 from core 2: 2469.48 9.65 6476.79 1909.11 13088.64 00:11:33.626 PCIE (0000:00:11.0) NSID 1 from core 2: 2469.48 9.65 6486.54 1983.22 13459.79 00:11:33.626 PCIE (0000:00:13.0) NSID 1 from core 2: 2469.48 9.65 6487.03 1876.59 12411.51 00:11:33.626 PCIE (0000:00:12.0) NSID 1 from core 2: 2469.48 9.65 6487.01 1682.63 12553.79 00:11:33.626 PCIE (0000:00:12.0) NSID 2 from core 2: 2469.48 9.65 6487.05 1671.65 16632.82 00:11:33.626 PCIE (0000:00:12.0) NSID 3 from core 2: 2469.48 9.65 6486.33 1703.71 13557.21 00:11:33.626 ======================================================== 00:11:33.626 Total : 14816.87 57.88 6485.13 1671.65 16632.82 00:11:33.626 00:11:33.626 Initializing NVMe Controllers 00:11:33.626 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:33.626 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:33.626 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:33.626 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:33.626 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:33.626 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:33.626 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:33.626 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:33.626 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:33.626 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:33.626 Initialization complete. Launching workers. 00:11:33.626 ======================================================== 00:11:33.626 Latency(us) 00:11:33.626 Device Information : IOPS MiB/s Average min max 00:11:33.626 PCIE (0000:00:10.0) NSID 1 from core 1: 5372.65 20.99 2976.09 1496.78 6002.13 00:11:33.626 PCIE (0000:00:11.0) NSID 1 from core 1: 5372.65 20.99 2977.39 1378.82 5819.84 00:11:33.626 PCIE (0000:00:13.0) NSID 1 from core 1: 5372.65 20.99 2977.34 1365.83 5732.33 00:11:33.626 PCIE (0000:00:12.0) NSID 1 from core 1: 5372.65 20.99 2977.28 1332.09 5662.27 00:11:33.626 PCIE (0000:00:12.0) NSID 2 from core 1: 5372.65 20.99 2977.34 1365.88 5673.50 00:11:33.626 PCIE (0000:00:12.0) NSID 3 from core 1: 5372.65 20.99 2977.25 1369.44 6230.73 00:11:33.626 ======================================================== 00:11:33.626 Total : 32235.92 125.92 2977.12 1332.09 6230.73 00:11:33.626 00:11:33.626 10:00:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70389 00:11:35.008 Initializing NVMe Controllers 00:11:35.008 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:35.008 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:35.008 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:35.008 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:35.008 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:35.008 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:35.008 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:35.008 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:35.008 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:35.008 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:35.008 Initialization complete. Launching workers. 
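
A quick consistency check on these spdk_nvme_perf tables before the next one prints: with -o 4096 the MiB/s column is just IOPS scaled by 4096/2^20, e.g. for the core 1 rows above:

    echo "scale=2; 5372.65 * 4096 / 1048576" | bc   # 20.98, in line with the reported 20.99 MiB/s
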
00:11:35.008 ======================================================== 00:11:35.008 Latency(us) 00:11:35.008 Device Information : IOPS MiB/s Average min max 00:11:35.008 PCIE (0000:00:10.0) NSID 1 from core 0: 8289.44 32.38 1928.65 931.03 8705.74 00:11:35.008 PCIE (0000:00:11.0) NSID 1 from core 0: 8289.44 32.38 1929.67 962.96 8709.42 00:11:35.008 PCIE (0000:00:13.0) NSID 1 from core 0: 8289.44 32.38 1929.62 965.77 8997.16 00:11:35.008 PCIE (0000:00:12.0) NSID 1 from core 0: 8289.44 32.38 1929.57 955.17 9000.65 00:11:35.008 PCIE (0000:00:12.0) NSID 2 from core 0: 8292.64 32.39 1928.78 875.16 6492.33 00:11:35.008 PCIE (0000:00:12.0) NSID 3 from core 0: 8292.64 32.39 1928.75 848.16 6998.59 00:11:35.008 ======================================================== 00:11:35.008 Total : 49743.03 194.31 1929.17 848.16 9000.65 00:11:35.008 00:11:35.008 10:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70390 00:11:35.008 10:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70461 00:11:35.008 10:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:35.008 10:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70462 00:11:35.008 10:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:35.008 10:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:38.304 Initializing NVMe Controllers 00:11:38.304 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:38.304 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:38.304 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:38.304 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:38.304 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:38.304 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:38.304 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:38.304 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:38.304 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:38.304 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:38.304 Initialization complete. Launching workers. 
00:11:38.304 ======================================================== 00:11:38.304 Latency(us) 00:11:38.304 Device Information : IOPS MiB/s Average min max 00:11:38.304 PCIE (0000:00:10.0) NSID 1 from core 0: 5467.65 21.36 2924.48 1097.25 6229.00 00:11:38.304 PCIE (0000:00:11.0) NSID 1 from core 0: 5467.65 21.36 2925.80 1132.36 6445.82 00:11:38.304 PCIE (0000:00:13.0) NSID 1 from core 0: 5467.65 21.36 2925.92 1106.53 6318.34 00:11:38.304 PCIE (0000:00:12.0) NSID 1 from core 0: 5467.65 21.36 2925.92 1103.39 5618.87 00:11:38.305 PCIE (0000:00:12.0) NSID 2 from core 0: 5467.65 21.36 2926.10 1093.55 6098.28 00:11:38.305 PCIE (0000:00:12.0) NSID 3 from core 0: 5467.65 21.36 2926.19 1093.63 6366.91 00:11:38.305 ======================================================== 00:11:38.305 Total : 32805.89 128.15 2925.73 1093.55 6445.82 00:11:38.305 00:11:38.305 Initializing NVMe Controllers 00:11:38.305 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:38.305 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:38.305 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:38.305 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:38.305 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:38.305 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:38.305 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:38.305 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:38.305 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:38.305 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:38.305 Initialization complete. Launching workers. 00:11:38.305 ======================================================== 00:11:38.305 Latency(us) 00:11:38.305 Device Information : IOPS MiB/s Average min max 00:11:38.305 PCIE (0000:00:10.0) NSID 1 from core 1: 5730.64 22.39 2790.22 1009.87 5636.73 00:11:38.305 PCIE (0000:00:11.0) NSID 1 from core 1: 5730.64 22.39 2791.44 1037.33 5645.67 00:11:38.305 PCIE (0000:00:13.0) NSID 1 from core 1: 5730.64 22.39 2791.35 1043.76 5324.15 00:11:38.305 PCIE (0000:00:12.0) NSID 1 from core 1: 5730.64 22.39 2791.29 1046.02 5461.23 00:11:38.305 PCIE (0000:00:12.0) NSID 2 from core 1: 5730.64 22.39 2791.30 1034.38 5607.87 00:11:38.305 PCIE (0000:00:12.0) NSID 3 from core 1: 5730.64 22.39 2791.23 1022.07 5585.05 00:11:38.305 ======================================================== 00:11:38.305 Total : 34383.82 134.31 2791.14 1009.87 5645.67 00:11:38.305 00:11:40.209 Initializing NVMe Controllers 00:11:40.209 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:40.209 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:40.209 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:40.209 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:40.209 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:40.209 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:40.209 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:40.209 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:40.209 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:40.209 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:40.209 Initialization complete. Launching workers. 
00:11:40.209 ======================================================== 00:11:40.209 Latency(us) 00:11:40.209 Device Information : IOPS MiB/s Average min max 00:11:40.209 PCIE (0000:00:10.0) NSID 1 from core 2: 3701.96 14.46 4319.63 993.31 16781.81 00:11:40.209 PCIE (0000:00:11.0) NSID 1 from core 2: 3701.96 14.46 4320.88 1009.49 12822.55 00:11:40.209 PCIE (0000:00:13.0) NSID 1 from core 2: 3701.96 14.46 4321.01 985.73 12744.31 00:11:40.209 PCIE (0000:00:12.0) NSID 1 from core 2: 3701.96 14.46 4321.38 924.63 12798.09 00:11:40.209 PCIE (0000:00:12.0) NSID 2 from core 2: 3701.96 14.46 4321.08 856.62 16278.45 00:11:40.209 PCIE (0000:00:12.0) NSID 3 from core 2: 3701.96 14.46 4321.00 799.66 13305.74 00:11:40.209 ======================================================== 00:11:40.209 Total : 22211.76 86.76 4320.83 799.66 16781.81 00:11:40.209 00:11:40.209 ************************************ 00:11:40.209 END TEST nvme_multi_secondary 00:11:40.209 ************************************ 00:11:40.209 10:00:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70461 00:11:40.209 10:00:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70462 00:11:40.209 00:11:40.209 real 0m10.668s 00:11:40.209 user 0m18.618s 00:11:40.209 sys 0m0.863s 00:11:40.209 10:00:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:40.209 10:00:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:40.209 10:00:29 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:40.209 10:00:29 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:40.209 10:00:29 nvme -- common/autotest_common.sh@1088 -- # [[ -e /proc/69406 ]] 00:11:40.209 10:00:29 nvme -- common/autotest_common.sh@1089 -- # kill 69406 00:11:40.209 10:00:29 nvme -- common/autotest_common.sh@1090 -- # wait 69406 00:11:40.209 [2024-06-10 10:00:29.628481] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.628595] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.628681] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.628732] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.632138] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.632193] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.632218] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.632236] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.634460] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 
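
Before the stub cleanup above finishes draining its queued admin requests, the shape of the just-completed nvme_multi_secondary test is worth spelling out: three spdk_nvme_perf instances attach to the same controllers through a shared shm ID (-i 0), each pinned to its own core mask, with two of them reaped via wait. A condensed sketch of that pattern using the flags logged above (the backgrounding order here is illustrative, not lifted from nvme.sh):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # core 2, foreground
    wait "$pid0" "$pid1"
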
00:11:40.209 [2024-06-10 10:00:29.634523] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.634549] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.634568] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.636853] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.636903] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.636931] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.209 [2024-06-10 10:00:29.636950] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70336) is not found. Dropping the request. 00:11:40.467 [2024-06-10 10:00:29.927427] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:40.467 10:00:29 nvme -- common/autotest_common.sh@1092 -- # rm -f /var/run/spdk_stub0 00:11:40.467 10:00:29 nvme -- common/autotest_common.sh@1096 -- # echo 2 00:11:40.467 10:00:29 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:40.467 10:00:29 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:40.467 10:00:29 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:40.467 10:00:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.467 ************************************ 00:11:40.467 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:40.467 ************************************ 00:11:40.467 10:00:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:40.725 * Looking for test storage... 
00:11:40.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # bdfs=() 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # local bdfs 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # bdfs=() 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # local bdfs 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # (( 4 == 0 )) 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1526 -- # echo 0000:00:10.0 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70616 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70616 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@830 -- # '[' -z 70616 ']' 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:40.725 10:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:40.725 [2024-06-10 10:00:30.216734] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:11:40.725 [2024-06-10 10:00:30.216869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70616 ] 00:11:40.985 [2024-06-10 10:00:30.405086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.243 [2024-06-10 10:00:30.646251] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.243 [2024-06-10 10:00:30.646401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.243 [2024-06-10 10:00:30.646535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.243 [2024-06-10 10:00:30.646836] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.177 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:42.177 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@863 -- # return 0 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:42.178 nvme0n1 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_kZJ9J.txt 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:42.178 true 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1718013631 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70644 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:42.178 
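
The bdev_nvme_send_cmd RPC above submits the deliberately stuck admin command whose completion the test later unpacks with base64_decode_bits. That unpacking can be checked by hand: the returned .cpl blob decodes to a status word of 0x0002 (the "status=2" seen further down), and with the NVMe completion layout (bit 0 phase tag, bits 1-8 status code, bits 9-11 status code type) it yields exactly the SC/SCT the test derives:

    status=2   # status word recovered from the base64 .cpl payload
    printf 'SC=0x%x SCT=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # SC=0x1 SCT=0x0: generic "invalid opcode", matching the injected error
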
10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:42.178 10:00:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:44.078 [2024-06-10 10:00:33.476894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:44.078 [2024-06-10 10:00:33.477691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:44.078 [2024-06-10 10:00:33.477844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:44.078 [2024-06-10 10:00:33.477942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.078 [2024-06-10 10:00:33.479910] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.078 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70644 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70644 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70644 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_kZJ9J.txt 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:44.078 10:00:33 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_kZJ9J.txt 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70616 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@949 -- # '[' -z 70616 ']' 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # kill -0 70616 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # uname 00:11:44.078 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:44.334 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 70616 00:11:44.334 killing process with pid 70616 00:11:44.334 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:44.334 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:44.334 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 70616' 00:11:44.334 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # kill 70616 00:11:44.334 10:00:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # wait 70616 00:11:46.286 10:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:46.286 10:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:46.286 00:11:46.287 real 
0m5.819s 00:11:46.287 user 0m20.048s 00:11:46.287 sys 0m0.567s 00:11:46.287 10:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:46.287 10:00:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:46.287 ************************************ 00:11:46.287 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:46.287 ************************************ 00:11:46.545 10:00:35 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:46.545 10:00:35 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:46.545 10:00:35 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:46.545 10:00:35 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:46.545 10:00:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.545 ************************************ 00:11:46.545 START TEST nvme_fio 00:11:46.545 ************************************ 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # nvme_fio_test 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # bdfs=() 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # local bdfs 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # (( 4 == 0 )) 00:11:46.545 10:00:35 nvme.nvme_fio -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:46.545 10:00:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:46.804 10:00:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:46.804 10:00:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:47.063 10:00:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:47.063 10:00:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1359 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local sanitizers 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # shift 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local asan_lib= 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # grep libasan 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # break 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:47.063 10:00:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:47.322 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:47.322 fio-3.35 00:11:47.322 Starting 1 thread 00:11:50.621 00:11:50.621 test: (groupid=0, jobs=1): err= 0: pid=70800: Mon Jun 10 10:00:39 2024 00:11:50.621 read: IOPS=16.1k, BW=62.8MiB/s (65.9MB/s)(126MiB/2001msec) 00:11:50.621 slat (usec): min=4, max=118, avg= 6.06, stdev= 1.92 00:11:50.621 clat (usec): min=319, max=10508, avg=3957.52, stdev=578.31 00:11:50.621 lat (usec): min=324, max=10627, avg=3963.58, stdev=579.14 00:11:50.621 clat percentiles (usec): 00:11:50.621 | 1.00th=[ 2835], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 00:11:50.621 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:50.621 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4621], 00:11:50.621 | 99.00th=[ 7046], 99.50th=[ 7701], 99.90th=[ 8356], 99.95th=[ 9110], 00:11:50.621 | 99.99th=[10290] 00:11:50.621 bw ( KiB/s): min=60176, max=66792, per=98.21%, avg=63181.33, stdev=3349.28, samples=3 00:11:50.621 iops : min=15044, max=16698, avg=15795.33, stdev=837.32, samples=3 00:11:50.621 write: IOPS=16.1k, BW=63.0MiB/s (66.0MB/s)(126MiB/2001msec); 0 zone resets 00:11:50.621 slat (nsec): min=4793, max=54274, avg=6244.26, stdev=1824.31 00:11:50.621 clat (usec): min=284, max=10347, avg=3959.78, stdev=585.02 00:11:50.621 lat (usec): min=296, max=10359, avg=3966.02, stdev=585.81 00:11:50.621 clat percentiles (usec): 00:11:50.621 | 1.00th=[ 2802], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687], 00:11:50.621 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:50.621 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4621], 00:11:50.621 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[ 8455], 99.95th=[ 9241], 00:11:50.621 | 99.99th=[10159] 00:11:50.621 bw ( KiB/s): min=59488, max=66224, per=97.53%, avg=62885.33, stdev=3368.38, samples=3 00:11:50.621 iops : min=14872, max=16556, avg=15721.33, stdev=842.10, samples=3 00:11:50.621 lat (usec) : 500=0.02%, 
750=0.01%, 1000=0.01% 00:11:50.621 lat (msec) : 2=0.07%, 4=72.17%, 10=27.70%, 20=0.02% 00:11:50.621 cpu : usr=98.95%, sys=0.10%, ctx=5, majf=0, minf=608 00:11:50.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:50.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.621 issued rwts: total=32182,32254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.621 00:11:50.621 Run status group 0 (all jobs): 00:11:50.621 READ: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=126MiB (132MB), run=2001-2001msec 00:11:50.621 WRITE: bw=63.0MiB/s (66.0MB/s), 63.0MiB/s-63.0MiB/s (66.0MB/s-66.0MB/s), io=126MiB (132MB), run=2001-2001msec 00:11:50.621 ----------------------------------------------------- 00:11:50.621 Suppressions used: 00:11:50.621 count bytes template 00:11:50.621 1 32 /usr/src/fio/parse.c 00:11:50.621 1 8 libtcmalloc_minimal.so 00:11:50.621 ----------------------------------------------------- 00:11:50.621 00:11:50.621 10:00:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:50.621 10:00:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:50.621 10:00:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:50.621 10:00:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:50.880 10:00:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:50.880 10:00:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:51.138 10:00:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:51.138 10:00:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1359 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local sanitizers 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # shift 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local asan_lib= 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # grep libasan 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # break 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:51.138 10:00:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:51.401 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:51.401 fio-3.35 00:11:51.401 Starting 1 thread 00:11:54.693 00:11:54.693 test: (groupid=0, jobs=1): err= 0: pid=70862: Mon Jun 10 10:00:43 2024 00:11:54.693 read: IOPS=15.6k, BW=60.7MiB/s (63.7MB/s)(122MiB/2001msec) 00:11:54.693 slat (nsec): min=4367, max=54413, avg=6373.66, stdev=1946.58 00:11:54.693 clat (usec): min=414, max=13915, avg=4096.47, stdev=758.84 00:11:54.694 lat (usec): min=420, max=13960, avg=4102.85, stdev=759.70 00:11:54.694 clat percentiles (usec): 00:11:54.694 | 1.00th=[ 2540], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3589], 00:11:54.694 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 4146], 60.00th=[ 4228], 00:11:54.694 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5145], 00:11:54.694 | 99.00th=[ 7111], 99.50th=[ 7767], 99.90th=[ 9372], 99.95th=[11994], 00:11:54.694 | 99.99th=[13698] 00:11:54.694 bw ( KiB/s): min=60792, max=61840, per=98.33%, avg=61165.33, stdev=585.39, samples=3 00:11:54.694 iops : min=15198, max=15460, avg=15291.33, stdev=146.35, samples=3 00:11:54.694 write: IOPS=15.6k, BW=60.8MiB/s (63.7MB/s)(122MiB/2001msec); 0 zone resets 00:11:54.694 slat (nsec): min=4731, max=48510, avg=6588.28, stdev=1987.00 00:11:54.694 clat (usec): min=297, max=13760, avg=4104.04, stdev=770.55 00:11:54.694 lat (usec): min=304, max=13769, avg=4110.62, stdev=771.40 00:11:54.694 clat percentiles (usec): 00:11:54.694 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3490], 20.00th=[ 3589], 00:11:54.694 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 4146], 60.00th=[ 4228], 00:11:54.694 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5211], 00:11:54.694 | 99.00th=[ 7177], 99.50th=[ 7898], 99.90th=[10290], 99.95th=[12256], 00:11:54.694 | 99.99th=[13435] 00:11:54.694 bw ( KiB/s): min=59720, max=62208, per=97.57%, avg=60714.67, stdev=1316.83, samples=3 00:11:54.694 iops : min=14930, max=15552, avg=15178.67, stdev=329.21, samples=3 00:11:54.694 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:54.694 lat (msec) : 2=0.27%, 4=45.80%, 10=53.80%, 20=0.10% 00:11:54.694 cpu : usr=98.95%, sys=0.10%, ctx=3, majf=0, minf=607 00:11:54.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:54.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:54.694 issued rwts: total=31117,31129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:54.694 00:11:54.694 Run status group 0 (all jobs): 00:11:54.694 READ: bw=60.7MiB/s (63.7MB/s), 60.7MiB/s-60.7MiB/s (63.7MB/s-63.7MB/s), io=122MiB (127MB), run=2001-2001msec 00:11:54.694 WRITE: bw=60.8MiB/s (63.7MB/s), 60.8MiB/s-60.8MiB/s (63.7MB/s-63.7MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:54.694 ----------------------------------------------------- 00:11:54.694 Suppressions used: 00:11:54.694 count bytes template 00:11:54.694 1 32 
/usr/src/fio/parse.c 00:11:54.694 1 8 libtcmalloc_minimal.so 00:11:54.694 ----------------------------------------------------- 00:11:54.694 00:11:54.694 10:00:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:54.694 10:00:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:54.694 10:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:54.694 10:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:54.988 10:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:54.988 10:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:55.246 10:00:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:55.246 10:00:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1359 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local sanitizers 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # shift 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local asan_lib= 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # grep libasan 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:55.246 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # break 00:11:55.247 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:55.247 10:00:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:55.247 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:55.247 fio-3.35 00:11:55.247 Starting 1 thread 00:11:58.540 00:11:58.540 test: (groupid=0, jobs=1): err= 0: pid=70917: Mon Jun 10 10:00:47 2024 00:11:58.540 read: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(115MiB/2001msec) 00:11:58.540 slat (nsec): min=4674, max=59549, avg=6663.36, stdev=2217.24 00:11:58.540 clat (usec): min=325, max=8534, avg=4308.55, stdev=633.09 00:11:58.540 lat (usec): min=331, max=8540, avg=4315.22, stdev=633.94 
00:11:58.540 clat percentiles (usec): 00:11:58.540 | 1.00th=[ 3228], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3752], 00:11:58.540 | 30.00th=[ 3884], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4424], 00:11:58.540 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 5276], 00:11:58.540 | 99.00th=[ 6325], 99.50th=[ 7242], 99.90th=[ 8029], 99.95th=[ 8160], 00:11:58.540 | 99.99th=[ 8356] 00:11:58.540 bw ( KiB/s): min=55752, max=63768, per=100.00%, avg=60362.67, stdev=4141.70, samples=3 00:11:58.540 iops : min=13938, max=15942, avg=15090.67, stdev=1035.43, samples=3 00:11:58.540 write: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(116MiB/2001msec); 0 zone resets 00:11:58.540 slat (nsec): min=4768, max=48547, avg=6845.40, stdev=2167.79 00:11:58.540 clat (usec): min=298, max=11806, avg=4322.67, stdev=694.85 00:11:58.540 lat (usec): min=306, max=11817, avg=4329.52, stdev=695.75 00:11:58.540 clat percentiles (usec): 00:11:58.540 | 1.00th=[ 3228], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3752], 00:11:58.540 | 30.00th=[ 3884], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4424], 00:11:58.540 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 5276], 00:11:58.540 | 99.00th=[ 6652], 99.50th=[ 7570], 99.90th=[10683], 99.95th=[11207], 00:11:58.540 | 99.99th=[11731] 00:11:58.540 bw ( KiB/s): min=56072, max=63232, per=100.00%, avg=60032.00, stdev=3640.00, samples=3 00:11:58.540 iops : min=14018, max=15808, avg=15008.00, stdev=910.00, samples=3 00:11:58.540 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:11:58.540 lat (msec) : 2=0.09%, 4=34.21%, 10=65.57%, 20=0.10% 00:11:58.540 cpu : usr=98.80%, sys=0.20%, ctx=4, majf=0, minf=608 00:11:58.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:58.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:58.540 issued rwts: total=29541,29571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:58.540 00:11:58.540 Run status group 0 (all jobs): 00:11:58.540 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=115MiB (121MB), run=2001-2001msec 00:11:58.540 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=116MiB (121MB), run=2001-2001msec 00:11:58.798 ----------------------------------------------------- 00:11:58.798 Suppressions used: 00:11:58.798 count bytes template 00:11:58.798 1 32 /usr/src/fio/parse.c 00:11:58.798 1 8 libtcmalloc_minimal.so 00:11:58.798 ----------------------------------------------------- 00:11:58.798 00:11:58.798 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:58.798 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:58.799 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:58.799 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:59.056 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:59.056 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:59.313 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:59.314 10:00:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' 
--bs=4096 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1359 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local sanitizers 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # shift 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local asan_lib= 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # grep libasan 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # break 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:59.314 10:00:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:59.572 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:59.572 fio-3.35 00:11:59.572 Starting 1 thread 00:12:02.855 00:12:02.855 test: (groupid=0, jobs=1): err= 0: pid=70985: Mon Jun 10 10:00:52 2024 00:12:02.855 read: IOPS=15.3k, BW=59.9MiB/s (62.8MB/s)(120MiB/2001msec) 00:12:02.855 slat (nsec): min=4649, max=44219, avg=6424.94, stdev=1795.96 00:12:02.855 clat (usec): min=338, max=11304, avg=4164.53, stdev=543.73 00:12:02.855 lat (usec): min=345, max=11311, avg=4170.95, stdev=544.29 00:12:02.855 clat percentiles (usec): 00:12:02.855 | 1.00th=[ 2835], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3687], 00:12:02.855 | 30.00th=[ 3851], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:12:02.855 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4555], 95.00th=[ 4686], 00:12:02.855 | 99.00th=[ 6259], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[ 7832], 00:12:02.855 | 99.99th=[ 8586] 00:12:02.855 bw ( KiB/s): min=59616, max=61344, per=98.25%, avg=60277.33, stdev=932.59, samples=3 00:12:02.855 iops : min=14904, max=15336, avg=15069.33, stdev=233.15, samples=3 00:12:02.855 write: IOPS=15.4k, BW=60.0MiB/s (62.9MB/s)(120MiB/2001msec); 0 zone resets 00:12:02.855 slat (nsec): min=4824, max=76945, avg=6649.40, stdev=1926.96 00:12:02.855 clat (usec): min=294, max=8423, avg=4142.98, stdev=531.72 00:12:02.855 lat (usec): min=301, max=8429, avg=4149.63, stdev=532.32 00:12:02.855 clat percentiles (usec): 00:12:02.855 | 1.00th=[ 2835], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3687], 00:12:02.855 | 30.00th=[ 3818], 
40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4359], 00:12:02.855 | 70.00th=[ 4424], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:12:02.855 | 99.00th=[ 6128], 99.50th=[ 7046], 99.90th=[ 7832], 99.95th=[ 8029], 00:12:02.855 | 99.99th=[ 8291] 00:12:02.855 bw ( KiB/s): min=58944, max=60768, per=97.59%, avg=59968.00, stdev=932.40, samples=3 00:12:02.855 iops : min=14736, max=15192, avg=14992.00, stdev=233.10, samples=3 00:12:02.855 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:02.855 lat (msec) : 2=0.12%, 4=35.90%, 10=63.95%, 20=0.01% 00:12:02.855 cpu : usr=98.95%, sys=0.05%, ctx=4, majf=0, minf=605 00:12:02.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:02.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:02.855 issued rwts: total=30692,30739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:02.855 00:12:02.855 Run status group 0 (all jobs): 00:12:02.855 READ: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=120MiB (126MB), run=2001-2001msec 00:12:02.855 WRITE: bw=60.0MiB/s (62.9MB/s), 60.0MiB/s-60.0MiB/s (62.9MB/s-62.9MB/s), io=120MiB (126MB), run=2001-2001msec 00:12:03.114 ----------------------------------------------------- 00:12:03.114 Suppressions used: 00:12:03.114 count bytes template 00:12:03.114 1 32 /usr/src/fio/parse.c 00:12:03.114 1 8 libtcmalloc_minimal.so 00:12:03.114 ----------------------------------------------------- 00:12:03.114 00:12:03.114 10:00:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:03.114 10:00:52 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:03.114 00:12:03.114 real 0m16.639s 00:12:03.114 user 0m13.369s 00:12:03.114 sys 0m1.661s 00:12:03.114 10:00:52 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:03.114 ************************************ 00:12:03.114 10:00:52 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:03.114 END TEST nvme_fio 00:12:03.114 ************************************ 00:12:03.114 ************************************ 00:12:03.114 END TEST nvme 00:12:03.114 ************************************ 00:12:03.114 00:12:03.114 real 1m29.784s 00:12:03.114 user 3m43.179s 00:12:03.114 sys 0m13.725s 00:12:03.114 10:00:52 nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:03.114 10:00:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:03.114 10:00:52 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:12:03.114 10:00:52 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:03.114 10:00:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:03.114 10:00:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:03.114 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:12:03.114 ************************************ 00:12:03.114 START TEST nvme_scc 00:12:03.114 ************************************ 00:12:03.114 10:00:52 nvme_scc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:03.372 * Looking for test storage... 
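The nvme_fio run above applies the same sequence to each PCIe controller (0000:00:10.0 through 0000:00:13.0): spdk_nvme_identify confirms a namespace and checks for 'Extended Data LBA', bs is fixed at 4096, and fio is launched with the SPDK NVMe ioengine while the ASan runtime located via ldd is preloaded. A minimal, hedged sketch of that pattern follows; the paths and the dots-for-colons traddr syntax in --filename are taken from the trace itself, but this is a simplified illustration rather than the verbatim fio_plugin helper from autotest_common.sh:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # locate the ASan runtime the plugin was linked against (e.g. /usr/lib64/libasan.so.8)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload ASan first, then the SPDK ioengine, and hand fio the example job file;
    # colons in the PCIe address are replaced with dots in fio's --filename syntax
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096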
00:12:03.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:03.372 10:00:52 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.372 10:00:52 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.372 10:00:52 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.372 10:00:52 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.372 10:00:52 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.372 10:00:52 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.372 10:00:52 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.372 10:00:52 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:03.372 10:00:52 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:03.372 10:00:52 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:03.372 10:00:52 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.372 10:00:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:03.372 10:00:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:03.372 10:00:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:03.372 10:00:52 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:03.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.887 Waiting for block devices as requested 00:12:03.887 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:03.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.162 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.162 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:09.461 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:09.461 10:00:58 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:09.461 10:00:58 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:09.461 10:00:58 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:09.461 10:00:58 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.461 10:00:58 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:09.461 
10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.461 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:09.462 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.462 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:09.463 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:09.463 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.464 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.465 10:00:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.465 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:09.466 10:00:58 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:09.466 10:00:58 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:09.466 10:00:58 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.466 10:00:58 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.466 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:09.467 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 
10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:09.467 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.467 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:09.468 10:00:58 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:09.468 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:09.468 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:09.469 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 
10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:09.470 10:00:58 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:09.470 10:00:58 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:09.470 10:00:58 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.470 10:00:58 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.470 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.471 10:00:58 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:09.471 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.472 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:09.473 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 
10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.473 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:09.474 10:00:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.474 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
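The stretch above is nvme/functions.sh's nvme_get at work: line @16 runs /usr/local/src/nvme-cli/nvme id-ns against the device, and lines @21-@23 repeatedly split each "reg : value" line with IFS=: / read -r reg val, then eval the pair into a global associative array (nvme2n2 here). A minimal sketch of that pattern, assuming plain "reg : value" output; the names parse_id_output and fake_id_ns are illustrative stand-ins, not part of the SPDK scripts:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in the trace: split each
    # "reg : value" line on ':' and store it in a global associative
    # array whose name is passed as $1.
    parse_id_output() {
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip banner/empty lines
            reg=${reg//[[:space:]]/}           # trim the register name
            val=${val#"${val%%[![:space:]]*}"} # drop leading spaces only
            eval "${ref}[$reg]=\"\$val\""
        done < <("$@")
    }

    fake_id_ns() {  # canned stand-in for `nvme id-ns /dev/nvme2n2`
        printf 'nsze  : 0x100000\nncap  : 0x100000\nnlbaf : 7\n'
    }
    parse_id_output demo_ns fake_id_ns
    echo "nsze=${demo_ns[nsze]} nlbaf=${demo_ns[nlbaf]}"  # nsze=0x100000 nlbaf=7

Because read -r assigns everything after the first ':' to the last variable, values that themselves contain colons (the lbaf descriptors below) survive intact, exactly as seen in the trace.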
00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:09.475 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:09.476 10:00:58 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.476 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:09.477 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
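Each lbafN value collected this way is an LBA format descriptor: ms is the metadata bytes per block, lbads the data size as a power of two, and rp a relative-performance hint. With flbas=0x4 the namespace uses format 4, ms:0 lbads:12, i.e. 4096-byte blocks, which is why that entry is marked "(in use)". A small decoder for this exact nvme-cli string layout (lbaf_block_size is a hypothetical helper, not part of nvme/functions.sh):

    # Decode the "ms:M lbads:D rp:R" strings stored in e.g. nvme2n3[lbaf4].
    lbaf_block_size() {
        local desc=$1
        [[ $desc =~ lbads:([0-9]+) ]] || return 1
        echo $(( 1 << BASH_REMATCH[1] ))  # block size in bytes = 2^lbads
    }

    lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'  # 4096
    lbaf_block_size 'ms:0 lbads:9 rp:0 '           # 512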
00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:09.478 10:00:58 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:09.478 10:00:58 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:09.478 10:00:58 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.478 10:00:58 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:09.478 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:09.479 10:00:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:09.479 10:00:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.479 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:09.480 10:00:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:09.480 10:00:58 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.480 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:09.481 
10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
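The nvme3 dump above is the nvme_get loop from test/common/nvme/functions.sh at work: each "reg : val" line of nvme id-ctrl output is split by read -r reg val with IFS=:, and every non-empty pair is eval'd into a per-controller associative array (nvme3[wctemp]=343, nvme3[oncs]=0x15d, and so on). A minimal standalone sketch of that parsing pattern follows; parse_id_ctrl and ctrl are illustrative names, not the functions.sh identifiers, and the eval indirection is dropped to keep the core split visible.

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing pattern traced above: split each
# "register : value" line on ':' and keep the pair in an associative
# array keyed by register name. Because read assigns the remainder of
# the line to the last variable, values that contain ':' (subnqn, for
# example) survive intact.

declare -A ctrl=()

parse_id_ctrl() {
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # strip whitespace from the key
        val="${val#"${val%%[![:space:]]*}"}"    # trim leading spaces from the value
        # Skip empty pairs, mirroring the [[ -n ... ]] guard at functions.sh line 22.
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done
}

# Input shaped like `nvme id-ctrl` output:
parse_id_ctrl <<'EOF'
vid       : 0x1b36
oncs      : 0x15d
sqes      : 0x66
cqes      : 0x44
subnqn    : nqn.2019-08.org.qemu:fdp-subsys3
EOF

printf 'oncs=%s subnqn=%s\n' "${ctrl[oncs]}" "${ctrl[subnqn]}"

functions.sh itself goes through eval (the 'nvme3[oncs]="0x15d"' strings in the trace) because the target array name is held in a variable; the sketch skips that indirection.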
00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:09.481 10:00:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:12:09.481 10:00:58 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.482 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:12:09.482 10:00:58 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:12:09.482 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:12:09.482 10:00:58 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:12:09.738 10:00:58 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:12:09.738 10:00:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:09.738 10:00:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:09.738 10:00:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:09.738 10:00:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:12:09.739 10:00:58 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:12:09.739 10:00:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:09.739 10:00:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:12:09.739 10:00:58 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:09.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:10.930 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.930 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.930 10:01:00 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:10.930 10:01:00 nvme_scc -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:12:10.930 10:01:00 nvme_scc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:10.930 10:01:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:10.930 ************************************ 00:12:10.930 START TEST nvme_simple_copy 00:12:10.930 ************************************ 00:12:10.930 10:01:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:11.189 Initializing NVMe Controllers
00:12:11.189 Attaching to 0000:00:10.0
00:12:11.189 Controller supports SCC. Attached to 0000:00:10.0
00:12:11.189 Namespace ID: 1 size: 6GB
00:12:11.189 Initialization complete.
00:12:11.189
00:12:11.189 Controller QEMU NVMe Ctrl (12340 )
00:12:11.189 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:11.189 Namespace Block Size:4096
00:12:11.189 Writing LBAs 0 to 63 with Random Data
00:12:11.189 Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:11.189 LBAs matching Written Data: 64
00:12:11.189
00:12:11.189 real 0m0.311s
00:12:11.189 user 0m0.134s
00:12:11.189 sys 0m0.076s
00:12:11.189 10:01:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # xtrace_disable
00:12:11.189 10:01:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:12:11.189 ************************************
00:12:11.189 END TEST nvme_simple_copy
00:12:11.189 ************************************
00:12:11.189
00:12:11.189 real 0m8.038s
00:12:11.189 user 0m1.312s
00:12:11.189 sys 0m1.698s
00:12:11.189 10:01:00 nvme_scc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:12:11.189 10:01:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:11.189 ************************************
00:12:11.189 END TEST nvme_scc
00:12:11.189 ************************************
00:12:11.189 10:01:00 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]]
00:12:11.189 10:01:00 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]]
00:12:11.189 10:01:00 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:12:11.189 10:01:00 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]]
00:12:11.189 10:01:00 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:11.189 10:01:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:12:11.189 10:01:00 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:12:11.189 10:01:00 -- common/autotest_common.sh@10 -- # set +x
00:12:11.189 ************************************
00:12:11.189 START TEST nvme_fdp
00:12:11.189 ************************************
00:12:11.189 10:01:00 nvme_fdp -- common/autotest_common.sh@1124 -- # test/nvme/nvme_fdp.sh
00:12:11.448 * Looking for test storage...
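Controller selection for the SCC test above reduces to one arithmetic gate: ctrl_has_scc reads the ONCS value captured during the id-ctrl parse and tests bit 8, which is where Identify Controller advertises the Copy ("simple copy") command. All four controllers report oncs=0x15d, which has bit 8 (0x100) set, so each passes and the harness takes nvme1, bound at 0000:00:10.0. A small sketch of the same check; has_simple_copy is an illustrative name, not the functions.sh helper.

#!/usr/bin/env bash
# ONCS is a bit field from Identify Controller; bit 8 (0x100) is the
# Copy command, the same bit the (( oncs & 1 << 8 )) test in the
# trace checks. 0x15d = 0b1_0101_1101, so bit 8 is set.

has_simple_copy() {
    local oncs=$1
    (( oncs & 1 << 8 ))     # exit status 0 when the bit is set
}

has_simple_copy 0x15d && echo "SCC supported"       # the controllers in this run
has_simple_copy 0x05d || echo "SCC not supported"   # same bits with bit 8 cleared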
00:12:11.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:11.448 10:01:00 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:11.448 10:01:00 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:11.448 10:01:00 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:11.448 10:01:00 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:11.448 10:01:00 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.448 10:01:00 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.448 10:01:00 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.448 10:01:00 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.448 10:01:00 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.449 10:01:00 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.449 10:01:00 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.449 10:01:00 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:11.449 10:01:00 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:11.449 10:01:00 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:11.449 10:01:00 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:11.449 10:01:00 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:11.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.967 Waiting for block devices as requested 00:12:11.967 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.967 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.225 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.225 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.512 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:17.512 10:01:06 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:17.512 10:01:06 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:17.512 10:01:06 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:17.512 10:01:06 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.512 10:01:06 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 
10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:17.512 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:17.513 10:01:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:17.513 10:01:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.513 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:17.514 10:01:06 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.514 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.515 
10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:17.515 
10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:17.515 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:17.516 10:01:06 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:17.516 10:01:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:17.516 10:01:06 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:17.516 10:01:06 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:17.516 10:01:06 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.517 10:01:06 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:17.517 10:01:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 
10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
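[annotation] The functions.sh@21-23 entries repeated throughout this trace all come from one small loop in nvme/functions.sh: each line of nvme id-ctrl / id-ns output is split on ':' into a field name and a value, and the value is stored in a bash associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's default plain-text output ("field : value" per line); the helper name nvme_get_sketch is invented for illustration and is not the real function:

  nvme_get_sketch() {
      local ref=$1 subcmd=$2 dev=$3 reg val   # e.g. nvme_get_sketch nvme1 id-ctrl /dev/nvme1
      declare -gA "$ref"                      # cf. the trace's: local -gA 'nvme1=()'
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}            # strip the padding around the field name
          val=${val# }
          [[ -n $val ]] || continue           # cf. the trace's: [[ -n 0x1b36 ]]
          eval "${ref}[\$reg]=\$val"          # cf. the trace's: nvme1[vid]=0x1b36
      done < <(nvme "$subcmd" "$dev")         # the trace uses /usr/local/src/nvme-cli/nvme
  }

Because only the first ':' is used as the separator and the right-hand side of an assignment is not word-split, multi-word values such as sn, mn, ps0 and the lbafN descriptors survive intact, which is why the trace shows entries like nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'.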
00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.517 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:17.518 10:01:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:17.518 10:01:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:17.518 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
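[annotation] The functions.sh@47-63 entries (visible above where the walk hands off from nvme0 to nvme1) show the outer loop: every /sys/class/nvme/nvme* controller is checked with pci_can_use, identified with id-ctrl, each of its namespaces identified with id-ns, and the results recorded in the ctrls/nvmes/bdfs maps. A hedged sketch of that walk, reusing the hypothetical nvme_get_sketch helper above; names not visible in the trace are invented:

  scan_nvmes_sketch() {
      local -A ctrls=() nvmes=() bdfs=()
      local ctrl ctrl_dev ns ns_dev pci
      for ctrl in /sys/class/nvme/nvme*; do
          [[ -e $ctrl ]] || continue
          ctrl_dev=${ctrl##*/}                          # e.g. nvme1
          pci=$(readlink -f "$ctrl/device")
          pci=${pci##*/}                                # PCI BDF, cf. pci_can_use 0000:00:10.0
          nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
          local -n _ctrl_ns=${ctrl_dev}_ns              # per-controller namespace map
          for ns in "$ctrl/${ctrl##*/}n"*; do           # e.g. /sys/class/nvme/nvme1/nvme1n1
              [[ -e $ns ]] || continue
              ns_dev=${ns##*/}
              nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
              _ctrl_ns[${ns_dev##*n}]=$ns_dev           # cf. functions.sh@58
          done
          ctrls["$ctrl_dev"]=$ctrl_dev                  # cf. functions.sh@60-62
          nvmes["$ctrl_dev"]=${ctrl_dev}_ns
          bdfs["$ctrl_dev"]=$pci
          unset -n _ctrl_ns                             # fresh nameref next iteration
      done
      declare -p ctrls nvmes bdfs
  }

The real script additionally keeps ordered_ctrls indexed by controller number (functions.sh@63 above) so tests can iterate the devices in a stable order.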
00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:17.519 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
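The xtrace output around this point is nvme/functions.sh splitting each line of 'nvme id-ns' output on the first ':' and eval-ing the pair into a bash associative array named after the device (here nvme1n1). A minimal, self-contained sketch of that parsing pattern follows; it assumes a canned id-ns sample and an illustrative parse_id_output name instead of the real nvme-cli call, so it is a stand-in for the traced code, not a copy of it.

parse_id_output() {                     # illustrative name, not taken from functions.sh
    local ref=$1 reg val                # $1 = name of the target associative array
    declare -gA "$ref=()"               # e.g. nvme1n1=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}                 # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}       # trim leading spaces from the value
        eval "${ref}[$reg]=\$val"                # same eval-into-array step as functions.sh@23
    done
}

# Canned sample standing in for '/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1'.
sample='nsze   : 0x17a17a
flbas  : 0x7
lbaf7  : ms:64 lbads:12 rp:0 (in use)'

parse_id_output demo_ns <<<"$sample"
echo "nsze=${demo_ns[nsze]} flbas=${demo_ns[flbas]} lbaf7=${demo_ns[lbaf7]}"

Because val is the last variable passed to read, everything after the first colon (including further colons, as in the lbaf descriptors) lands in the value, which matches the multi-field strings visible in this trace.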
00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
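The enumeration that drives these parses (functions.sh@47-63, repeated below when /sys/class/nvme/nvme2 is reached) walks /sys/class/nvme/nvme*, checks the PCI address with pci_can_use, parses the controller and each namespace, and then files the results into the ctrls, nvmes, bdfs and ordered_ctrls maps. The following is only a rough sketch of that bookkeeping shape, with placeholder controller names and a fake_bdf table, since it cannot assume real NVMe hardware.

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
declare -A fake_bdf=( [nvme1]=0000:00:10.0 [nvme2]=0000:00:12.0 )   # placeholder inventory

for ctrl_dev in nvme1 nvme2; do                  # stands in for: for ctrl in /sys/class/nvme/nvme*
    declare -ga "${ctrl_dev}_ns=()"              # per-controller namespace map (cf. functions.sh@53-58)
    eval "${ctrl_dev}_ns[1]=${ctrl_dev}n1"
    ctrls[$ctrl_dev]=$ctrl_dev                   # cf. functions.sh@60-63
    nvmes[$ctrl_dev]=${ctrl_dev}_ns
    bdfs[$ctrl_dev]=${fake_bdf[$ctrl_dev]}
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done

for c in "${!bdfs[@]}"; do echo "$c -> ${bdfs[$c]} (namespaces in ${nvmes[$c]})"; done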
00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:17.520 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 
10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
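Once an id-ns parse like the one above finishes, consumers can read the fields straight out of the array: the in-use LBA format index is the low nibble of flbas, and that format's lbads field gives log2 of the block size. The decode below is illustrative rather than code from the test itself, using the values seen in this trace (flbas=0x7 and lbaf7 marked "(in use)").

declare -A nvme1n1=(                   # values copied from the trace above for illustration
    [flbas]=0x7
    [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
)
fmt=$(( nvme1n1[flbas] & 0xf ))        # low nibble of FLBAS selects the LBA format index
lbaf=${nvme1n1[lbaf$fmt]}
lbads=${lbaf##*lbads:}; lbads=${lbads%% *}
echo "in-use format $fmt: lbads=$lbads -> block size $((1 << lbads)) bytes"   # 4096 bytes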
00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:17.521 10:01:06 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:17.521 10:01:06 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:17.521 10:01:06 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.521 10:01:06 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.521 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:17.522 10:01:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:17.522 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:17.523 10:01:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:17.523 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:17.524 10:01:06 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 
10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.524 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:17.525 10:01:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.525 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:17.526 10:01:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
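The xtrace entries above all come from the nvme_get helper in nvme/functions.sh: nvme-cli prints id-ns fields as "name : value" lines, and the helper splits each line on the first colon, then evals the pair into a global associative array named after the device (nvme2n1, nvme2n2, ...). A minimal sketch of that loop, assuming the three-argument calling convention visible in the trace (array name, nvme subcommand, device node); an illustration, not the SPDK source verbatim:

nvme_get_sketch() {                         # nvme_get_sketch nvme2n2 id-ns /dev/nvme2n2
    local ref=$1 reg val
    local -gA "$ref=()"                     # global assoc array, as at functions.sh@20
    while IFS=: read -r reg val; do         # split "nsze : 0x100000" on the first ':'
        [[ -n $val ]] || continue           # skip lines with no value (functions.sh@22)
        reg=${reg//[[:space:]]/}            # "lbaf  4 " -> "lbaf4"
        eval "${ref}[${reg}]=\"${val# }\""  # -> nvme2n2[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
}

Values that themselves contain colons, such as "ms:0 lbads:12 rp:0 (in use)", survive intact because read assigns everything after the first delimiter to val.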
00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
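A note on the fields being captured here: flbas selects the namespace's active LBA format (its low four bits index into the lbaf0..lbaf7 descriptors), and within each descriptor lbads is log2 of the LBA data size. The trace records flbas=0x4 together with "lbaf4 ... lbads:12 ... (in use)", i.e. these namespaces are formatted with 4096-byte blocks and no per-block metadata (ms:0). The arithmetic, as a quick check:

flbas=0x4
fmt=$((flbas & 0xf))     # low nibble of FLBAS -> 4, matching "lbaf4 ... (in use)"
lbads=12                 # from "ms:0 lbads:12 rp:0 (in use)"
echo "active format: lbaf$fmt, block size $((1 << lbads)) bytes"   # lbaf4, 4096 bytes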
00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:17.526 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.527 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:17.528 10:01:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.528 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:17.529 10:01:06 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:17.529 10:01:06 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:17.529 10:01:06 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:17.529 10:01:06 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:17.529 10:01:06 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.529 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
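Once nvme_get has run, later test code can read these controller attributes straight out of the populated arrays. A hypothetical accessor (get_ctrl_field is not part of functions.sh; the array names and values below are taken from the trace):

get_ctrl_field() {
    local -n _ctrl=$1            # bash 4.3+ nameref to e.g. the nvme3 array above
    printf '%s\n' "${_ctrl[$2]}"
}

get_ctrl_field nvme3 mdts        # -> 7      (max data transfer size, as a power of two)
get_ctrl_field nvme3 oacs        # -> 0x12a  (optional admin command support bitmap)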
00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.530 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:17.531 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:17.532 10:01:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 
10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:17.532 10:01:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:17.532 10:01:07 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:17.791 10:01:07 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:12:17.791 10:01:07 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:12:17.791 10:01:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:17.791 10:01:07 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:17.791 10:01:07 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.616 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.616 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.616 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.873 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.873 10:01:08 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:18.873 10:01:08 nvme_fdp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:12:18.873 10:01:08 nvme_fdp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:18.873 10:01:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:18.873 ************************************ 00:12:18.873 START TEST nvme_flexible_data_placement 00:12:18.873 ************************************ 00:12:18.873 10:01:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:19.131 Initializing NVMe Controllers 00:12:19.131 Attaching to 0000:00:13.0 00:12:19.131 Controller supports FDP Attached to 0000:00:13.0 00:12:19.131 Namespace ID: 1 Endurance Group ID: 1 
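The controller selection traced above comes down to a single capability test: get_ctrls_with_feature walks every controller parsed earlier and calls ctrl_has_fdp, which looks up the cached CTRATT identify field and tests bit 19, the Flexible Data Placement attribute. Only nvme3, with CTRATT 0x88010, has that bit set; the other three report 0x8000 and are skipped. A minimal standalone sketch of the same check (the hard-coded arrays stand in for the register maps that the IFS=:/read/eval loop above builds; the names are illustrative):

    # Each controller's identify fields live in an associative array,
    # populated by the read/eval loop replayed above; the two arrays
    # here are hard-coded stand-ins for illustration.
    declare -A nvme1=( [ctratt]=0x8000 )
    declare -A nvme3=( [ctratt]=0x88010 )

    ctrl_has_fdp() {
        local -n _ctrl=$1                     # nameref into the register map
        local ctratt=${_ctrl[ctratt]}
        # CTRATT bit 19 advertises Flexible Data Placement support.
        (( ctratt & 1 << 19 ))
    }

    for ctrl in nvme1 nvme3; do
        ctrl_has_fdp "$ctrl" && echo "$ctrl"  # prints only nvme3
    done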
00:12:19.131 Initialization complete. 00:12:19.131 00:12:19.131 ================================== 00:12:19.131 == FDP tests for Namespace: #01 == 00:12:19.131 ================================== 00:12:19.131 00:12:19.131 Get Feature: FDP: 00:12:19.131 ================= 00:12:19.131 Enabled: Yes 00:12:19.131 FDP configuration Index: 0 00:12:19.131 00:12:19.131 FDP configurations log page 00:12:19.131 =========================== 00:12:19.131 Number of FDP configurations: 1 00:12:19.131 Version: 0 00:12:19.131 Size: 112 00:12:19.131 FDP Configuration Descriptor: 0 00:12:19.131 Descriptor Size: 96 00:12:19.131 Reclaim Group Identifier format: 2 00:12:19.131 FDP Volatile Write Cache: Not Present 00:12:19.131 FDP Configuration: Valid 00:12:19.131 Vendor Specific Size: 0 00:12:19.131 Number of Reclaim Groups: 2 00:12:19.131 Number of Reclaim Unit Handles: 8 00:12:19.131 Max Placement Identifiers: 128 00:12:19.131 Number of Namespaces Supported: 256 00:12:19.131 Reclaim Unit Nominal Size: 6000000 bytes 00:12:19.131 Estimated Reclaim Unit Time Limit: Not Reported 00:12:19.131 RUH Desc #000: RUH Type: Initially Isolated 00:12:19.131 RUH Desc #001: RUH Type: Initially Isolated 00:12:19.131 RUH Desc #002: RUH Type: Initially Isolated 00:12:19.131 RUH Desc #003: RUH Type: Initially Isolated 00:12:19.131 RUH Desc #004: RUH Type: Initially Isolated 00:12:19.131 RUH Desc #005: RUH Type: Initially Isolated 00:12:19.131 RUH Desc #006: RUH Type: Initially Isolated 00:12:19.131 RUH Desc #007: RUH Type: Initially Isolated 00:12:19.131 00:12:19.131 FDP reclaim unit handle usage log page 00:12:19.132 ====================================== 00:12:19.132 Number of Reclaim Unit Handles: 8 00:12:19.132 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:19.132 RUH Usage Desc #001: RUH Attributes: Unused 00:12:19.132 RUH Usage Desc #002: RUH Attributes: Unused 00:12:19.132 RUH Usage Desc #003: RUH Attributes: Unused 00:12:19.132 RUH Usage Desc #004: RUH Attributes: Unused 00:12:19.132 RUH Usage Desc #005: RUH Attributes: Unused 00:12:19.132 RUH Usage Desc #006: RUH Attributes: Unused 00:12:19.132 RUH Usage Desc #007: RUH Attributes: Unused 00:12:19.132 00:12:19.132 FDP statistics log page 00:12:19.132 ======================= 00:12:19.132 Host bytes with metadata written: 814407680 00:12:19.132 Media bytes with metadata written: 814505984 00:12:19.132 Media bytes erased: 0 00:12:19.132 00:12:19.132 FDP Reclaim unit handle status 00:12:19.132 ============================== 00:12:19.132 Number of RUHS descriptors: 2 00:12:19.132 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005752 00:12:19.132 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:19.132 00:12:19.132 FDP write on placement id: 0 success 00:12:19.132 00:12:19.132 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:19.132 00:12:19.132 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:19.132 00:12:19.132 Get Feature: FDP Events for Placement handle: #0 00:12:19.132 ======================== 00:12:19.132 Number of FDP Events: 6 00:12:19.132 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:19.132 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:19.132 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:19.132 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:19.132 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:19.132 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
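In the reclaim unit handle status earlier in the report above, RUAMW (Reclaim Unit Available Media Writes) is the number of logical blocks that can still be written through each placement identifier before its reclaim unit fills: placement ID 0x0000 has 0x5752 (22354) blocks left after the test's writes, while 0x4000 is untouched at its nominal 0x6000 (24576). A quick sketch for decoding those hex fields from a captured report (fdp.log is an illustrative capture of the output above, stripped of timestamps):

    # Pull the descriptor number, placement ID, and RUAMW out of each
    # RUHS line and print RUAMW in decimal; bash printf accepts the
    # 0x-prefixed values directly.
    while read -r _ _ desc _ pid _ _ _ _ _ ruamw; do
        printf '%s PID %s: %d logical blocks still writable\n' \
            "$desc" "$pid" "$ruamw"
    done < <(grep '^RUHS Desc' fdp.log)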
00:12:19.132 00:12:19.132 FDP events log page 00:12:19.132 =================== 00:12:19.132 Number of FDP events: 1 00:12:19.132 FDP Event #0: 00:12:19.132 Event Type: RU Not Written to Capacity 00:12:19.132 Placement Identifier: Valid 00:12:19.132 NSID: Valid 00:12:19.132 Location: Valid 00:12:19.132 Placement Identifier: 0 00:12:19.132 Event Timestamp: 8 00:12:19.132 Namespace Identifier: 1 00:12:19.132 Reclaim Group Identifier: 0 00:12:19.132 Reclaim Unit Handle Identifier: 0 00:12:19.132 00:12:19.132 FDP test passed 00:12:19.132 00:12:19.132 real 0m0.289s 00:12:19.132 user 0m0.086s 00:12:19.132 sys 0m0.102s 00:12:19.132 10:01:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:19.132 ************************************ 00:12:19.132 END TEST nvme_flexible_data_placement 00:12:19.132 ************************************ 00:12:19.132 10:01:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 00:12:19.132 real 0m7.958s 00:12:19.132 user 0m1.222s 00:12:19.132 sys 0m1.707s 00:12:19.132 10:01:08 nvme_fdp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:19.132 10:01:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 ************************************ 00:12:19.132 END TEST nvme_fdp 00:12:19.132 ************************************ 00:12:19.132 10:01:08 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:12:19.132 10:01:08 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:19.390 10:01:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:19.390 10:01:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:19.390 10:01:08 -- common/autotest_common.sh@10 -- # set +x 00:12:19.390 ************************************ 00:12:19.390 START TEST nvme_rpc 00:12:19.390 ************************************ 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:19.390 * Looking for test storage... 
00:12:19.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:19.390 10:01:08 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.390 10:01:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1523 -- # bdfs=() 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1523 -- # local bdfs 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1512 -- # bdfs=() 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1512 -- # local bdfs 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1514 -- # (( 4 == 0 )) 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@1526 -- # echo 0000:00:10.0 00:12:19.390 10:01:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:19.390 10:01:08 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72315 00:12:19.390 10:01:08 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:19.390 10:01:08 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:19.390 10:01:08 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72315 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@830 -- # '[' -z 72315 ']' 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:19.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:19.390 10:01:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.647 [2024-06-10 10:01:08.931268] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
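Before the target comes up, get_first_nvme_bdf (traced above) picks the controller the RPC test will attach: gen_nvme.sh emits an SPDK bdev config covering every NVMe device, jq pulls each transport address out of it, and the first BDF wins. A condensed standalone sketch of that enumeration (paths match the workspace above):

    # Sketch: enumerate NVMe BDFs the way get_first_nvme_bdf does above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1       # bail out if no controllers found
    printf '%s\n' "${bdfs[@]}"            # 0000:00:10.0 ... 0000:00:13.0
    echo "first bdf: ${bdfs[0]}"          # the test attaches Nvme0 here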
00:12:19.647 [2024-06-10 10:01:08.931493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72315 ] 00:12:19.647 [2024-06-10 10:01:09.105751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.905 [2024-06-10 10:01:09.330516] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.905 [2024-06-10 10:01:09.330532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.838 10:01:10 nvme_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:20.838 10:01:10 nvme_rpc -- common/autotest_common.sh@863 -- # return 0 00:12:20.838 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:20.838 Nvme0n1 00:12:21.095 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:21.095 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:21.095 request: 00:12:21.095 { 00:12:21.095 "bdev_name": "Nvme0n1", 00:12:21.095 "filename": "non_existing_file", 00:12:21.095 "method": "bdev_nvme_apply_firmware", 00:12:21.095 "req_id": 1 00:12:21.095 } 00:12:21.095 Got JSON-RPC error response 00:12:21.095 response: 00:12:21.095 { 00:12:21.095 "code": -32603, 00:12:21.095 "message": "open file failed." 00:12:21.095 } 00:12:21.095 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:21.095 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:21.095 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:21.352 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:21.352 10:01:10 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72315 00:12:21.352 10:01:10 nvme_rpc -- common/autotest_common.sh@949 -- # '[' -z 72315 ']' 00:12:21.352 10:01:10 nvme_rpc -- common/autotest_common.sh@953 -- # kill -0 72315 00:12:21.352 10:01:10 nvme_rpc -- common/autotest_common.sh@954 -- # uname 00:12:21.352 10:01:10 nvme_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:21.352 10:01:10 nvme_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 72315 00:12:21.610 10:01:10 nvme_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:21.610 10:01:10 nvme_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:21.610 killing process with pid 72315 00:12:21.610 10:01:10 nvme_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 72315' 00:12:21.610 10:01:10 nvme_rpc -- common/autotest_common.sh@968 -- # kill 72315 00:12:21.610 10:01:10 nvme_rpc -- common/autotest_common.sh@973 -- # wait 72315 00:12:23.512 ************************************ 00:12:23.512 END TEST nvme_rpc 00:12:23.512 ************************************ 00:12:23.512 00:12:23.512 real 0m4.264s 00:12:23.512 user 0m8.048s 00:12:23.512 sys 0m0.583s 00:12:23.512 10:01:12 nvme_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:23.512 10:01:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.512 10:01:12 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:23.512 10:01:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:23.512 
10:01:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:23.512 10:01:12 -- common/autotest_common.sh@10 -- # set +x 00:12:23.512 ************************************ 00:12:23.512 START TEST nvme_rpc_timeouts 00:12:23.512 ************************************ 00:12:23.512 10:01:12 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:23.770 * Looking for test storage... 00:12:23.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:23.771 10:01:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:23.771 10:01:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72391 00:12:23.771 10:01:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72391 00:12:23.771 10:01:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72415 00:12:23.771 10:01:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:23.771 10:01:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:23.771 10:01:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72415 00:12:23.771 10:01:13 nvme_rpc_timeouts -- common/autotest_common.sh@830 -- # '[' -z 72415 ']' 00:12:23.771 10:01:13 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.771 10:01:13 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:23.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.771 10:01:13 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.771 10:01:13 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:23.771 10:01:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:23.771 [2024-06-10 10:01:13.165836] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:12:23.771 [2024-06-10 10:01:13.166030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72415 ] 00:12:24.029 [2024-06-10 10:01:13.340120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:24.286 [2024-06-10 10:01:13.570241] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.286 [2024-06-10 10:01:13.570252] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.852 Checking default timeout settings: 00:12:24.852 10:01:14 nvme_rpc_timeouts -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:24.852 10:01:14 nvme_rpc_timeouts -- common/autotest_common.sh@863 -- # return 0 00:12:24.852 10:01:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:24.852 10:01:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:25.427 Making settings changes with rpc: 00:12:25.427 10:01:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:25.427 10:01:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:25.684 Check default vs. modified settings: 00:12:25.684 10:01:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:25.684 10:01:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72391 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72391 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:25.943 Setting action_on_timeout is changed as expected. 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72391 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72391 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:25.943 Setting timeout_us is changed as expected. 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72391 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72391 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:25.943 Setting timeout_admin_us is changed as expected. 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
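Each of the three iterations above follows the same pattern: extract a setting's value from the default and the modified save_config dumps, strip everything but alphanumerics, and confirm the two values differ. Condensed into a standalone sketch (the temp file names match the ones created earlier in this test):

    # Sketch of the default-vs-modified comparison traced above.
    default=/tmp/settings_default_72391
    modified=/tmp/settings_modified_72391

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done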
00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72391 /tmp/settings_modified_72391 00:12:25.943 10:01:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72415 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@949 -- # '[' -z 72415 ']' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # kill -0 72415 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # uname 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 72415 00:12:25.943 killing process with pid 72415 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # echo 'killing process with pid 72415' 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # kill 72415 00:12:25.943 10:01:15 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # wait 72415 00:12:28.473 RPC TIMEOUT SETTING TEST PASSED. 00:12:28.473 10:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:28.473 ************************************ 00:12:28.473 END TEST nvme_rpc_timeouts 00:12:28.473 ************************************ 00:12:28.473 00:12:28.473 real 0m4.647s 00:12:28.473 user 0m8.914s 00:12:28.473 sys 0m0.580s 00:12:28.473 10:01:17 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:28.473 10:01:17 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:28.473 10:01:17 -- spdk/autotest.sh@243 -- # uname -s 00:12:28.473 10:01:17 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:12:28.473 10:01:17 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:28.473 10:01:17 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:28.473 10:01:17 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:28.473 10:01:17 -- common/autotest_common.sh@10 -- # set +x 00:12:28.473 ************************************ 00:12:28.473 START TEST sw_hotplug 00:12:28.473 ************************************ 00:12:28.474 10:01:17 sw_hotplug -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:28.474 * Looking for test storage... 
00:12:28.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:28.474 10:01:17 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:28.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:28.731 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.731 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.731 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.731 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.990 10:01:18 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:28.990 10:01:18 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:28.990 10:01:18 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:12:28.990 10:01:18 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@230 -- # local class 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:12:28.990 10:01:18 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:12:28.990 10:01:18 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:28.990 10:01:18 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:28.990 10:01:18 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:28.991 10:01:18 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:29.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:29.508 Waiting for block devices as requested 00:12:29.508 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.508 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.766 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.766 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:35.038 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:35.038 10:01:24 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:35.038 10:01:24 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:35.297 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:35.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:35.297 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:35.646 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:35.904 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.904 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.904 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:35.904 10:01:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73273 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:36.163 10:01:25 sw_hotplug -- common/autotest_common.sh@706 -- # local cmd_es=0 00:12:36.163 10:01:25 sw_hotplug -- common/autotest_common.sh@708 -- # [[ -t 0 ]] 00:12:36.163 10:01:25 sw_hotplug -- common/autotest_common.sh@708 -- # exec 00:12:36.163 10:01:25 sw_hotplug -- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R 00:12:36.163 10:01:25 sw_hotplug -- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 false 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:36.163 10:01:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:36.422 Initializing NVMe Controllers 00:12:36.422 Attaching to 0000:00:10.0 00:12:36.422 Attaching to 0000:00:11.0 00:12:36.422 Attached to 0000:00:10.0 00:12:36.422 Attached to 0000:00:11.0 00:12:36.422 Initialization complete. Starting I/O... 
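For reference, the nvme_in_userspace trace above boils down to the following enumeration (a minimal sketch; the helper and variable names approximate scripts/common.sh, and the PCI_ALLOWED handling is simplified):

# Sketch only: list NVMe controllers (class 01, subclass 08, prog-if 02) that are
# still bound to the kernel nvme driver, honouring PCI_ALLOWED when it is set.
nvme_in_userspace_sketch() {
    local bdf bdfs=()
    for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
            | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
        # pci_can_use(): an empty PCI_ALLOWED means "everything is allowed"
        if [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $bdf "* ]]; then
            continue
        fi
        # keep only controllers the kernel nvme driver currently claims
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"
}

As the trace shows, the test then keeps only the first nvme_count=2 entries ("${nvmes[@]::nvme_count}") and reruns setup.sh with PCI_ALLOWED restricted to those two controllers, so only 0000:00:10.0 and 0000:00:11.0 take part in the hotplug events.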
00:12:36.422 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:36.422 QEMU NVMe Ctrl (12341 ): 1 I/Os completed (+1) 00:12:36.422 00:12:37.358 QEMU NVMe Ctrl (12340 ): 1056 I/Os completed (+1056) 00:12:37.358 QEMU NVMe Ctrl (12341 ): 1157 I/Os completed (+1156) 00:12:37.358 00:12:38.322 QEMU NVMe Ctrl (12340 ): 2439 I/Os completed (+1383) 00:12:38.322 QEMU NVMe Ctrl (12341 ): 2595 I/Os completed (+1438) 00:12:38.322 00:12:39.257 QEMU NVMe Ctrl (12340 ): 3995 I/Os completed (+1556) 00:12:39.257 QEMU NVMe Ctrl (12341 ): 4260 I/Os completed (+1665) 00:12:39.257 00:12:40.633 QEMU NVMe Ctrl (12340 ): 5619 I/Os completed (+1624) 00:12:40.633 QEMU NVMe Ctrl (12341 ): 6078 I/Os completed (+1818) 00:12:40.633 00:12:41.566 QEMU NVMe Ctrl (12340 ): 7327 I/Os completed (+1708) 00:12:41.566 QEMU NVMe Ctrl (12341 ): 7876 I/Os completed (+1798) 00:12:41.566 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.132 [2024-06-10 10:01:31.496358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:42.132 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:42.132 [2024-06-10 10:01:31.498898] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.499116] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.499168] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.499198] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:42.132 [2024-06-10 10:01:31.502426] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.502495] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.502530] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.502554] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.132 [2024-06-10 10:01:31.524455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:42.132 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:42.132 [2024-06-10 10:01:31.526462] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.526534] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.526571] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.526608] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:42.132 [2024-06-10 10:01:31.529541] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.529604] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.529635] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 [2024-06-10 10:01:31.529899] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.132 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_vendor 00:12:42.132 EAL: Scan for (pci) bus failed. 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.132 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:42.390 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:42.390 Attaching to 0000:00:10.0 00:12:42.390 Attached to 0000:00:10.0 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.390 10:01:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:42.390 Attaching to 0000:00:11.0 00:12:42.390 Attached to 0000:00:11.0 00:12:43.326 QEMU NVMe Ctrl (12340 ): 1636 I/Os completed (+1636) 00:12:43.326 QEMU NVMe Ctrl (12341 ): 1584 I/Os completed (+1584) 00:12:43.326 00:12:44.260 QEMU NVMe Ctrl (12340 ): 3324 I/Os completed (+1688) 00:12:44.260 QEMU NVMe Ctrl (12341 ): 3423 I/Os completed (+1839) 00:12:44.260 00:12:45.634 QEMU NVMe Ctrl (12340 ): 4988 I/Os completed (+1664) 00:12:45.634 QEMU NVMe Ctrl (12341 ): 5207 I/Os completed (+1784) 00:12:45.634 00:12:46.567 QEMU NVMe Ctrl (12340 ): 6741 I/Os completed (+1753) 00:12:46.567 QEMU NVMe Ctrl (12341 ): 7079 I/Os completed (+1872) 00:12:46.567 00:12:47.509 QEMU NVMe Ctrl (12340 ): 8469 I/Os completed (+1728) 00:12:47.509 QEMU NVMe Ctrl (12341 ): 8897 I/Os completed (+1818) 00:12:47.509 00:12:48.444 QEMU NVMe Ctrl (12340 ): 10125 I/Os completed (+1656) 00:12:48.444 QEMU NVMe Ctrl (12341 ): 10660 I/Os completed (+1763) 00:12:48.444 00:12:49.379 QEMU NVMe Ctrl (12340 ): 11957 I/Os completed (+1832) 00:12:49.379 QEMU NVMe Ctrl (12341 ): 12532 I/Os completed (+1872) 
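Only the echoed values are visible in the trace above; the sysfs files they land in are not shown. A plausible reconstruction of the remove/attach cycle, assuming the standard PCI hotplug interface (remove, rescan, driver_override, drivers_probe), is:

# Hypothetical reconstruction - the exact per-device writes in sw_hotplug.sh@58-62
# may differ (the trace shows one more echo of the BDF than is modelled here).
remove_nvmes() {                                  # sw_hotplug.sh@39-40
    local dev
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done
}

attach_nvmes() {                                  # sw_hotplug.sh@56-62
    local dev
    echo 1 > /sys/bus/pci/rescan
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
}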
00:12:49.379 00:12:50.312 QEMU NVMe Ctrl (12340 ): 13701 I/Os completed (+1744) 00:12:50.312 QEMU NVMe Ctrl (12341 ): 14400 I/Os completed (+1868) 00:12:50.312 00:12:51.246 QEMU NVMe Ctrl (12340 ): 15387 I/Os completed (+1686) 00:12:51.246 QEMU NVMe Ctrl (12341 ): 16272 I/Os completed (+1872) 00:12:51.246 00:12:52.619 QEMU NVMe Ctrl (12340 ): 17123 I/Os completed (+1736) 00:12:52.619 QEMU NVMe Ctrl (12341 ): 18083 I/Os completed (+1811) 00:12:52.619 00:12:53.553 QEMU NVMe Ctrl (12340 ): 18703 I/Os completed (+1580) 00:12:53.553 QEMU NVMe Ctrl (12341 ): 19866 I/Os completed (+1783) 00:12:53.553 00:12:54.487 QEMU NVMe Ctrl (12340 ): 20267 I/Os completed (+1564) 00:12:54.487 QEMU NVMe Ctrl (12341 ): 21534 I/Os completed (+1668) 00:12:54.487 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.487 [2024-06-10 10:01:43.851942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:54.487 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:54.487 [2024-06-10 10:01:43.854412] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.854658] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.854715] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.854747] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:54.487 [2024-06-10 10:01:43.858126] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.858193] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.858226] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.858250] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:12:54.487 EAL: Scan for (pci) bus failed. 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.487 [2024-06-10 10:01:43.885159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:54.487 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:54.487 [2024-06-10 10:01:43.887151] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.887218] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.887255] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.887287] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:54.487 [2024-06-10 10:01:43.890267] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.890324] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.890352] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 [2024-06-10 10:01:43.890384] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.487 10:01:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:54.745 Attaching to 0000:00:10.0 00:12:54.745 Attached to 0000:00:10.0 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.745 10:01:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:54.745 Attaching to 0000:00:11.0 00:12:54.745 Attached to 0000:00:11.0 00:12:55.310 QEMU NVMe Ctrl (12340 ): 980 I/Os completed (+980) 00:12:55.310 QEMU NVMe Ctrl (12341 ): 932 I/Os completed (+932) 00:12:55.310 00:12:56.242 QEMU NVMe Ctrl (12340 ): 2687 I/Os completed (+1707) 00:12:56.242 QEMU NVMe Ctrl (12341 ): 2711 I/Os completed (+1779) 00:12:56.242 00:12:57.615 QEMU NVMe Ctrl (12340 ): 4275 I/Os completed (+1588) 00:12:57.615 QEMU NVMe Ctrl (12341 ): 4518 I/Os completed (+1807) 00:12:57.615 00:12:58.583 QEMU NVMe Ctrl (12340 ): 6023 I/Os completed (+1748) 00:12:58.583 QEMU NVMe Ctrl (12341 ): 6355 I/Os completed (+1837) 00:12:58.583 00:12:59.517 QEMU NVMe Ctrl (12340 ): 7623 I/Os completed (+1600) 00:12:59.517 QEMU NVMe Ctrl (12341 ): 8124 I/Os completed (+1769) 00:12:59.517 00:13:00.451 QEMU NVMe Ctrl (12340 ): 9403 I/Os completed (+1780) 00:13:00.451 QEMU NVMe Ctrl (12341 ): 9958 I/Os completed (+1834) 00:13:00.451 00:13:01.398 QEMU NVMe Ctrl (12340 ): 11002 I/Os completed (+1599) 00:13:01.398 QEMU NVMe Ctrl (12341 ): 11755 I/Os completed (+1797) 00:13:01.398 00:13:02.334 QEMU NVMe Ctrl (12340 ): 12556 I/Os completed (+1554) 00:13:02.334 QEMU NVMe Ctrl (12341 ): 13465 I/Os completed (+1710) 00:13:02.334 00:13:03.268 QEMU 
NVMe Ctrl (12340 ): 14184 I/Os completed (+1628) 00:13:03.268 QEMU NVMe Ctrl (12341 ): 15204 I/Os completed (+1739) 00:13:03.268 00:13:04.645 QEMU NVMe Ctrl (12340 ): 15864 I/Os completed (+1680) 00:13:04.645 QEMU NVMe Ctrl (12341 ): 16982 I/Os completed (+1778) 00:13:04.645 00:13:05.212 QEMU NVMe Ctrl (12340 ): 17539 I/Os completed (+1675) 00:13:05.212 QEMU NVMe Ctrl (12341 ): 18748 I/Os completed (+1766) 00:13:05.212 00:13:06.589 QEMU NVMe Ctrl (12340 ): 19439 I/Os completed (+1900) 00:13:06.589 QEMU NVMe Ctrl (12341 ): 20675 I/Os completed (+1927) 00:13:06.589 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.848 [2024-06-10 10:01:56.191789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:06.848 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:06.848 [2024-06-10 10:01:56.193739] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.193940] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.193986] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.194013] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:06.848 [2024-06-10 10:01:56.199901] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.200018] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.200076] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.200117] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:13:06.848 EAL: Scan for (pci) bus failed. 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.848 [2024-06-10 10:01:56.224122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
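Putting the pieces together, the remove_attach_helper loop driving these three hotplug events has roughly this shape (a sketch only; wait_for_bdevs_gone is a placeholder name for the RPC polling used later when use_bdev=true, and attach_nvmes refers to the sketch above):

remove_attach_helper() {                    # sketch of sw_hotplug.sh@27-66
    local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev
    sleep "$hotplug_wait"                   # @36: let the initial attach settle
    while ((hotplug_events--)); do          # @38
        for dev in "${nvmes[@]}"; do        # @39-40: surprise-remove each controller
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
        if "$use_bdev"; then                # @43: only meaningful once spdk_tgt is running
            wait_for_bdevs_gone             # placeholder for the @50-51 polling loop
        fi
        attach_nvmes                        # @56-62: rescan and rebind to uio_pci_generic
        sleep $((hotplug_wait * 2))         # @66: sleep 12 between events
    done
}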
00:13:06.848 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:06.848 [2024-06-10 10:01:56.226333] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.226549] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.226783] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.226968] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:06.848 [2024-06-10 10:01:56.233910] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.233978] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.234009] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 [2024-06-10 10:01:56.234049] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.848 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:06.848 EAL: Scan for (pci) bus failed. 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:06.848 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:07.107 Attaching to 0000:00:10.0 00:13:07.107 Attached to 0000:00:10.0 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.107 10:01:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:07.107 Attaching to 0000:00:11.0 00:13:07.107 Attached to 0000:00:11.0 00:13:07.107 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:07.107 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:07.107 [2024-06-10 10:01:56.546400] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:19.313 10:02:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:19.313 10:02:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:19.313 10:02:08 sw_hotplug -- common/autotest_common.sh@716 -- # time=43.05 00:13:19.313 10:02:08 sw_hotplug -- common/autotest_common.sh@717 -- # echo 43.05 00:13:19.313 10:02:08 sw_hotplug -- common/autotest_common.sh@719 -- # return 0 00:13:19.313 10:02:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.05 00:13:19.313 10:02:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 2 00:13:19.313 remove_attach_helper took 43.05s to complete (handling 2 nvme drive(s)) 10:02:08 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73273 00:13:25.919 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73273) - No such process 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73273 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=73806 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 73806 00:13:25.919 10:02:14 sw_hotplug -- common/autotest_common.sh@830 -- # '[' -z 73806 ']' 00:13:25.919 10:02:14 sw_hotplug -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.919 10:02:14 sw_hotplug -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:25.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.919 10:02:14 sw_hotplug -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.919 10:02:14 sw_hotplug -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:25.919 10:02:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.919 10:02:14 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.919 [2024-06-10 10:02:14.667415] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
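The commands traced here check that the hotplug example exited on its own once all events were processed; roughly the following, with the control flow around the failing kill -0 not reconstructed:

sleep "$hotplug_wait"        # @91: give the example time to drain its last event
kill -0 "$hotplug_pid"       # @93: fails with "No such process" once it has exited
wait "$hotplug_pid"          # @95: reap the child and collect its exit status
trap - SIGINT SIGTERM EXIT   # @102: drop the killprocess cleanup handler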
00:13:25.919 [2024-06-10 10:02:14.667575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73806 ] 00:13:25.919 [2024-06-10 10:02:14.842492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.919 [2024-06-10 10:02:15.073461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@863 -- # return 0 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@706 -- # local cmd_es=0 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@708 -- # [[ -t 0 ]] 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@708 -- # exec 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R 00:13:26.487 10:02:15 sw_hotplug -- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:26.487 10:02:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:33.046 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:33.046 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:33.046 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:33.047 10:02:21 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.047 10:02:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:33.047 10:02:21 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.047 [2024-06-10 10:02:21.920621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
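For the second half of the test (tgt_run_hotplug) the same events are driven against a running SPDK target. Outside the test harness the equivalent sequence would look roughly like this; the rpc.py invocations are an assumption, since the log itself uses the rpc_cmd wrapper:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &          # @109-110
spdk_tgt_pid=$!
# waitforlisten: poll until the target answers on /var/tmp/spdk.sock
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done
# @115: enable the bdev_nvme hotplug poller so surprise-removed controllers are
# detached (and later re-attached) by the target itself
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e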
00:13:33.047 [2024-06-10 10:02:21.923447] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:21.923517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:21.923548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 [2024-06-10 10:02:21.923614] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:21.923660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:21.923685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 [2024-06-10 10:02:21.923703] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:21.923720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:21.923735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 [2024-06-10 10:02:21.923752] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:21.923766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:21.923785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:33.047 10:02:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:33.047 [2024-06-10 10:02:22.320571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
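The bdev_bdfs helper and the polling loop traced here reduce to the following (sketch; the real helper feeds jq through process substitution, shown here as a plain pipe):

bdev_bdfs() {                               # sw_hotplug.sh@12-13
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# @50-51: after the surprise removal, poll until the target has dropped every controller
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    sleep 0.5
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    bdfs=($(bdev_bdfs))
done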
00:13:33.047 [2024-06-10 10:02:22.323662] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:22.323834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:22.324005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 [2024-06-10 10:02:22.324210] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:22.324472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:22.324502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 [2024-06-10 10:02:22.324526] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:22.324542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:22.324559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 [2024-06-10 10:02:22.324574] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.047 [2024-06-10 10:02:22.324591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.047 [2024-06-10 10:02:22.324605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:33.047 10:02:22 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.047 10:02:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:33.047 10:02:22 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:33.047 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:33.304 10:02:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:45.522 10:02:34 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:45.522 10:02:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:45.522 10:02:34 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:45.522 10:02:34 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:45.522 10:02:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:45.522 [2024-06-10 10:02:34.920788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
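Once both controllers have disappeared from bdev_get_bdevs, the helper rebinds them and verifies the target picked them back up, per the @66-71 steps visible above (sketch; attach_nvmes and bdev_bdfs as sketched earlier):

attach_nvmes                                # rescan and rebind to uio_pci_generic
sleep $((hotplug_wait * 2))                 # @66: give the hotplug poller time to re-attach
bdfs=($(bdev_bdfs))                         # @70
[[ ${bdfs[*]} == "${nvmes[*]}" ]]           # @71: expect "0000:00:10.0 0000:00:11.0" again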
00:13:45.522 [2024-06-10 10:02:34.924519] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.522 [2024-06-10 10:02:34.924847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.522 [2024-06-10 10:02:34.925094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.522 [2024-06-10 10:02:34.925358] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.522 [2024-06-10 10:02:34.925589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.522 [2024-06-10 10:02:34.925713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.522 [2024-06-10 10:02:34.926007] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.522 [2024-06-10 10:02:34.926169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.522 [2024-06-10 10:02:34.926390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.522 [2024-06-10 10:02:34.926613] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.522 [2024-06-10 10:02:34.926837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.522 [2024-06-10 10:02:34.927058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.522 10:02:34 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:45.522 10:02:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:46.089 [2024-06-10 10:02:35.320805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:46.089 [2024-06-10 10:02:35.323861] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.089 [2024-06-10 10:02:35.324033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:46.089 [2024-06-10 10:02:35.324195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:46.089 [2024-06-10 10:02:35.324409] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.089 [2024-06-10 10:02:35.324629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:46.089 [2024-06-10 10:02:35.324813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:46.089 [2024-06-10 10:02:35.324894] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.089 [2024-06-10 10:02:35.324942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:46.089 [2024-06-10 10:02:35.325118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:46.089 [2024-06-10 10:02:35.325329] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.089 [2024-06-10 10:02:35.325456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:46.089 [2024-06-10 10:02:35.325612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:46.089 10:02:35 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.089 10:02:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:46.089 10:02:35 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:46.089 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:46.346 10:02:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:58.543 10:02:47 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:58.543 10:02:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.543 10:02:47 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:58.543 [2024-06-10 10:02:47.921034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:58.543 [2024-06-10 10:02:47.924061] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.543 [2024-06-10 10:02:47.924231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.543 [2024-06-10 10:02:47.924387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.543 [2024-06-10 10:02:47.924590] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.543 [2024-06-10 10:02:47.924736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.543 [2024-06-10 10:02:47.924909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.543 [2024-06-10 10:02:47.925097] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.543 [2024-06-10 10:02:47.925303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.543 [2024-06-10 10:02:47.925450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.543 [2024-06-10 10:02:47.925677] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.543 [2024-06-10 10:02:47.925823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.543 [2024-06-10 10:02:47.925994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:58.543 10:02:47 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:58.543 10:02:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.543 10:02:47 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:58.543 10:02:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:59.110 [2024-06-10 10:02:48.421030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:59.110 [2024-06-10 10:02:48.423736] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.110 [2024-06-10 10:02:48.423788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.110 [2024-06-10 10:02:48.423815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.110 [2024-06-10 10:02:48.423842] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.110 [2024-06-10 10:02:48.423860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.110 [2024-06-10 10:02:48.423875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.110 [2024-06-10 10:02:48.423905] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.110 [2024-06-10 10:02:48.423919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.110 [2024-06-10 10:02:48.423935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.110 [2024-06-10 10:02:48.423951] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.110 [2024-06-10 10:02:48.423969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.110 [2024-06-10 10:02:48.423983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:13:59.110 10:02:48 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.110 10:02:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:59.110 10:02:48 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:59.110 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.368 10:02:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@716 -- # time=45.05 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@717 -- # echo 45.05 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@719 -- # return 0 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:14:11.568 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@706 -- # local cmd_es=0 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@708 -- # [[ -t 0 ]] 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@708 -- # exec 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R 00:14:11.568 10:03:00 sw_hotplug -- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:11.568 10:03:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.124 10:03:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.124 10:03:06 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.124 10:03:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.124 [2024-06-10 10:03:07.000505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
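Each "remove_attach_helper took NN.NNs" line in this log is produced by timing_cmd from autotest_common.sh, traced again at the start of this run. A minimal sketch of the mechanism (the real wrapper also preserves the helper's own output, which this version discards):

timing_cmd() {
    local cmd_es=0 time=0 TIMEFORMAT=%2R
    # bash's `time` writes its report to stderr; with TIMEFORMAT=%2R that report is
    # just the elapsed seconds, which the command substitution captures
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
    echo "$time"
    return "$cmd_es"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" "$nvme_count"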
00:14:18.124 [2024-06-10 10:03:07.002330] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.002387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.002412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 [2024-06-10 10:03:07.002442] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.002466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.002483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 [2024-06-10 10:03:07.002499] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.002515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.002530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 [2024-06-10 10:03:07.002547] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.002560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.002577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 10:03:07 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:18.124 [2024-06-10 10:03:07.500524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:18.124 [2024-06-10 10:03:07.503340] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.503391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.503419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 [2024-06-10 10:03:07.503445] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.503464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.503479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 [2024-06-10 10:03:07.503496] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.503511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.503526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 [2024-06-10 10:03:07.503542] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.124 [2024-06-10 10:03:07.503558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.124 [2024-06-10 10:03:07.503572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.124 10:03:07 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.124 10:03:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.124 10:03:07 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:18.124 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.382 10:03:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.584 10:03:19 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.584 10:03:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.584 10:03:19 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.584 10:03:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.584 10:03:19 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.584 10:03:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.584 [2024-06-10 10:03:20.000735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
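# The bare `echo` lines in the trace above (echo 1, echo uio_pci_generic,
# echo <bdf>, echo '') have their redirection targets hidden by xtrace. A
# conventional reading -- an assumption, since the sysfs paths are not
# visible in the log and the exact 1:1 mapping onto the traced lines is not
# recoverable -- is the standard PCI hot-remove/replug sequence:
echo 1 > "/sys/bus/pci/devices/$bdf/remove"     # detach the device from the bus
echo 1 > /sys/bus/pci/rescan                    # rediscover removed devices
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the driver
echo "$bdf" > /sys/bus/pci/drivers_probe        # bind it to the pinned driver
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # clear the override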
00:14:30.584 [2024-06-10 10:03:20.003831] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.584 [2024-06-10 10:03:20.003999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-06-10 10:03:20.004118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.584 [2024-06-10 10:03:20.004234] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.584 [2024-06-10 10:03:20.004332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-06-10 10:03:20.004423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.584 [2024-06-10 10:03:20.004509] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.584 [2024-06-10 10:03:20.004587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-06-10 10:03:20.004683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.584 [2024-06-10 10:03:20.004786] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.584 [2024-06-10 10:03:20.004867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.584 [2024-06-10 10:03:20.004963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.584 10:03:20 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.584 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:30.584 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:31.152 [2024-06-10 10:03:20.400741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:31.152 [2024-06-10 10:03:20.402736] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.152 [2024-06-10 10:03:20.402870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.152 [2024-06-10 10:03:20.402964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.152 [2024-06-10 10:03:20.403056] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.152 [2024-06-10 10:03:20.403141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.152 [2024-06-10 10:03:20.403238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.152 [2024-06-10 10:03:20.403356] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.152 [2024-06-10 10:03:20.403382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.152 [2024-06-10 10:03:20.403401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.152 [2024-06-10 10:03:20.403418] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.152 [2024-06-10 10:03:20.403435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.153 [2024-06-10 10:03:20.403449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:31.153 10:03:20 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.153 10:03:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:31.153 10:03:20 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:31.153 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.411 10:03:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:43.614 10:03:32 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.614 10:03:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 10:03:32 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.614 10:03:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.614 [2024-06-10 10:03:33.001067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:43.614 [2024-06-10 10:03:33.003105] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.614 [2024-06-10 10:03:33.003172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.614 [2024-06-10 10:03:33.003205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.614 [2024-06-10 10:03:33.003252] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.614 [2024-06-10 10:03:33.003293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.614 [2024-06-10 10:03:33.003326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.614 [2024-06-10 10:03:33.003352] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.614 [2024-06-10 10:03:33.003379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.614 [2024-06-10 10:03:33.003401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.614 [2024-06-10 10:03:33.003436] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.614 [2024-06-10 10:03:33.003463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.614 [2024-06-10 10:03:33.003484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:43.614 10:03:33 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.614 10:03:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:43.614 10:03:33 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:43.614 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:44.180 [2024-06-10 10:03:33.501074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:14:44.180 [2024-06-10 10:03:33.503041] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.180 [2024-06-10 10:03:33.503127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.180 [2024-06-10 10:03:33.503153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.180 [2024-06-10 10:03:33.503181] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.180 [2024-06-10 10:03:33.503200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.180 [2024-06-10 10:03:33.503215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.180 [2024-06-10 10:03:33.503232] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.180 [2024-06-10 10:03:33.503262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.180 [2024-06-10 10:03:33.503280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.180 [2024-06-10 10:03:33.503297] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.180 [2024-06-10 10:03:33.503314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.180 [2024-06-10 10:03:33.503328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:14:44.180 10:03:33 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:44.180 10:03:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.180 10:03:33 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:44.180 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:44.438 10:03:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@716 -- # time=45.07 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@717 -- # echo 45.07 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@719 -- # return 0 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.07 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.07 2 00:14:56.635 remove_attach_helper took 45.07s to complete (handling 2 nvme drive(s)) 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:56.635 10:03:45 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 73806 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@949 -- # '[' -z 73806 ']' 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@953 -- # kill -0 73806 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@954 -- # uname 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:56.635 10:03:45 sw_hotplug -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 73806 00:14:56.635 10:03:46 sw_hotplug -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:56.635 killing process with pid 
73806 00:14:56.635 10:03:46 sw_hotplug -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:56.635 10:03:46 sw_hotplug -- common/autotest_common.sh@967 -- # echo 'killing process with pid 73806' 00:14:56.635 10:03:46 sw_hotplug -- common/autotest_common.sh@968 -- # kill 73806 00:14:56.635 10:03:46 sw_hotplug -- common/autotest_common.sh@973 -- # wait 73806 00:14:59.165 10:03:48 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:59.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:59.424 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:59.424 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:59.683 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.683 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.683 00:14:59.683 real 2m31.413s 00:14:59.683 user 1m51.308s 00:14:59.683 sys 0m19.803s 00:14:59.683 10:03:49 sw_hotplug -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:59.683 10:03:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:59.683 ************************************ 00:14:59.683 END TEST sw_hotplug 00:14:59.683 ************************************ 00:14:59.683 10:03:49 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:14:59.683 10:03:49 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:59.683 10:03:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:59.683 10:03:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:59.683 10:03:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.683 ************************************ 00:14:59.683 START TEST nvme_xnvme 00:14:59.683 ************************************ 00:14:59.683 10:03:49 nvme_xnvme -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:59.941 * Looking for test storage... 
00:14:59.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:59.941 10:03:49 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.941 10:03:49 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.941 10:03:49 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.941 10:03:49 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.942 10:03:49 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.942 10:03:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.942 10:03:49 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.942 10:03:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:59.942 10:03:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.942 10:03:49 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:59.942 10:03:49 nvme_xnvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:59.942 10:03:49 nvme_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:59.942 10:03:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.942 ************************************ 00:14:59.942 START TEST xnvme_to_malloc_dd_copy 00:14:59.942 ************************************ 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # malloc_to_xnvme_copy 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
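# The xnvme_to_malloc_dd_copy run that follows wires a 1 GiB malloc bdev
# (2097152 x 512 B blocks) to an xnvme bdev over /dev/nullb0 (the null_blk
# gb=1 instance loaded above) and copies between them with spdk_dd.
# Condensed from the trace -- the spdk_dd path, flags, and JSON are verbatim;
# wrapping the config in a heredoc is this sketch's own plumbing, while the
# traced gen_conf appears to come from dd/common.sh:
gen_conf() {
    cat <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
}
# malloc0 -> null0 first; the later runs swap --ib/--ob for the reverse
# direction and switch io_mechanism from libaio to io_uring:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)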
00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:59.942 10:03:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:59.942 { 00:14:59.942 "subsystems": [ 00:14:59.942 { 00:14:59.942 "subsystem": "bdev", 00:14:59.942 "config": [ 00:14:59.942 { 00:14:59.942 "params": { 00:14:59.942 "block_size": 512, 00:14:59.942 "num_blocks": 2097152, 00:14:59.942 "name": "malloc0" 00:14:59.942 }, 00:14:59.942 "method": "bdev_malloc_create" 00:14:59.942 }, 00:14:59.942 { 00:14:59.942 "params": { 00:14:59.942 "io_mechanism": "libaio", 00:14:59.942 "filename": "/dev/nullb0", 00:14:59.942 "name": "null0" 00:14:59.942 }, 00:14:59.942 "method": "bdev_xnvme_create" 00:14:59.942 }, 00:14:59.942 { 00:14:59.942 "method": "bdev_wait_for_examine" 00:14:59.942 } 00:14:59.942 ] 00:14:59.942 } 00:14:59.942 ] 00:14:59.942 } 00:14:59.942 [2024-06-10 10:03:49.350099] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:14:59.942 [2024-06-10 10:03:49.350235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75176 ] 00:15:00.199 [2024-06-10 10:03:49.611136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.457 [2024-06-10 10:03:49.837532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.103  Copying: 166/1024 [MB] (166 MBps) Copying: 337/1024 [MB] (170 MBps) Copying: 508/1024 [MB] (170 MBps) Copying: 664/1024 [MB] (155 MBps) Copying: 829/1024 [MB] (165 MBps) Copying: 987/1024 [MB] (157 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:15:12.103 00:15:12.103 10:04:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:12.103 10:04:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:12.103 10:04:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:12.103 10:04:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:12.103 { 00:15:12.103 "subsystems": [ 00:15:12.103 { 00:15:12.103 "subsystem": "bdev", 00:15:12.103 "config": [ 00:15:12.103 { 00:15:12.103 "params": { 00:15:12.103 "block_size": 512, 00:15:12.103 "num_blocks": 2097152, 00:15:12.103 "name": "malloc0" 00:15:12.103 }, 00:15:12.103 "method": "bdev_malloc_create" 00:15:12.103 }, 00:15:12.103 { 00:15:12.103 "params": { 00:15:12.103 "io_mechanism": "libaio", 00:15:12.103 "filename": "/dev/nullb0", 00:15:12.103 "name": "null0" 00:15:12.103 }, 00:15:12.103 "method": "bdev_xnvme_create" 00:15:12.103 }, 00:15:12.103 { 00:15:12.103 "method": "bdev_wait_for_examine" 00:15:12.103 } 00:15:12.103 ] 00:15:12.103 } 00:15:12.103 ] 00:15:12.103 } 00:15:12.103 [2024-06-10 10:04:00.976945] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:12.103 [2024-06-10 10:04:00.977117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75319 ] 00:15:12.103 [2024-06-10 10:04:01.147036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.103 [2024-06-10 10:04:01.332300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.202  Copying: 168/1024 [MB] (168 MBps) Copying: 344/1024 [MB] (175 MBps) Copying: 510/1024 [MB] (165 MBps) Copying: 685/1024 [MB] (175 MBps) Copying: 862/1024 [MB] (176 MBps) Copying: 1024/1024 [MB] (average 173 MBps) 00:15:23.202 00:15:23.202 10:04:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:23.202 10:04:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:23.202 10:04:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:23.202 10:04:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:23.202 10:04:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:23.202 10:04:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:23.202 { 00:15:23.202 "subsystems": [ 00:15:23.202 { 00:15:23.202 "subsystem": "bdev", 00:15:23.202 "config": [ 00:15:23.202 { 00:15:23.202 "params": { 00:15:23.202 "block_size": 512, 00:15:23.202 "num_blocks": 2097152, 00:15:23.202 "name": "malloc0" 00:15:23.202 }, 00:15:23.202 "method": "bdev_malloc_create" 00:15:23.202 }, 00:15:23.202 { 00:15:23.202 "params": { 00:15:23.202 "io_mechanism": "io_uring", 00:15:23.202 "filename": "/dev/nullb0", 00:15:23.202 "name": "null0" 00:15:23.202 }, 00:15:23.202 "method": "bdev_xnvme_create" 00:15:23.202 }, 00:15:23.202 { 00:15:23.202 "method": "bdev_wait_for_examine" 00:15:23.202 } 00:15:23.202 ] 00:15:23.202 } 00:15:23.202 ] 00:15:23.202 } 00:15:23.202 [2024-06-10 10:04:12.007700] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:23.202 [2024-06-10 10:04:12.007847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75441 ] 00:15:23.202 [2024-06-10 10:04:12.173072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.202 [2024-06-10 10:04:12.401466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.262  Copying: 174/1024 [MB] (174 MBps) Copying: 344/1024 [MB] (169 MBps) Copying: 515/1024 [MB] (170 MBps) Copying: 688/1024 [MB] (172 MBps) Copying: 861/1024 [MB] (173 MBps) Copying: 1024/1024 [MB] (average 172 MBps) 00:15:34.262 00:15:34.262 10:04:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:34.262 10:04:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:34.262 10:04:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:34.262 10:04:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:34.262 { 00:15:34.262 "subsystems": [ 00:15:34.262 { 00:15:34.262 "subsystem": "bdev", 00:15:34.262 "config": [ 00:15:34.262 { 00:15:34.262 "params": { 00:15:34.262 "block_size": 512, 00:15:34.262 "num_blocks": 2097152, 00:15:34.262 "name": "malloc0" 00:15:34.262 }, 00:15:34.262 "method": "bdev_malloc_create" 00:15:34.262 }, 00:15:34.262 { 00:15:34.262 "params": { 00:15:34.262 "io_mechanism": "io_uring", 00:15:34.262 "filename": "/dev/nullb0", 00:15:34.262 "name": "null0" 00:15:34.262 }, 00:15:34.262 "method": "bdev_xnvme_create" 00:15:34.262 }, 00:15:34.262 { 00:15:34.262 "method": "bdev_wait_for_examine" 00:15:34.262 } 00:15:34.262 ] 00:15:34.262 } 00:15:34.262 ] 00:15:34.262 } 00:15:34.262 [2024-06-10 10:04:23.308436] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:34.262 [2024-06-10 10:04:23.308605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75567 ] 00:15:34.262 [2024-06-10 10:04:23.481038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.262 [2024-06-10 10:04:23.710121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.345  Copying: 182/1024 [MB] (182 MBps) Copying: 358/1024 [MB] (176 MBps) Copying: 535/1024 [MB] (176 MBps) Copying: 717/1024 [MB] (182 MBps) Copying: 897/1024 [MB] (180 MBps) Copying: 1024/1024 [MB] (average 179 MBps) 00:15:45.345 00:15:45.345 10:04:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:45.345 10:04:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:15:45.345 00:15:45.345 real 0m44.918s 00:15:45.345 user 0m39.521s 00:15:45.345 sys 0m4.780s 00:15:45.345 10:04:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:45.345 10:04:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:45.345 ************************************ 00:15:45.345 END TEST xnvme_to_malloc_dd_copy 00:15:45.345 ************************************ 00:15:45.345 10:04:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:45.345 10:04:34 nvme_xnvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:45.345 10:04:34 nvme_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:45.345 10:04:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.345 ************************************ 00:15:45.345 START TEST xnvme_bdevperf 00:15:45.345 ************************************ 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xnvme_bdevperf 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:45.345 10:04:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:45.345 { 00:15:45.345 "subsystems": [ 00:15:45.345 { 00:15:45.345 "subsystem": "bdev", 00:15:45.345 "config": [ 00:15:45.345 { 00:15:45.345 "params": { 00:15:45.345 "io_mechanism": "libaio", 00:15:45.345 "filename": "/dev/nullb0", 00:15:45.345 "name": "null0" 00:15:45.345 }, 00:15:45.345 "method": "bdev_xnvme_create" 00:15:45.345 }, 00:15:45.345 { 00:15:45.345 "method": "bdev_wait_for_examine" 00:15:45.345 } 00:15:45.345 ] 00:15:45.345 } 00:15:45.345 ] 00:15:45.345 } 00:15:45.345 [2024-06-10 10:04:34.327241] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:15:45.345 [2024-06-10 10:04:34.328031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75712 ] 00:15:45.345 [2024-06-10 10:04:34.499753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.345 [2024-06-10 10:04:34.685454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.603 Running I/O for 5 seconds... 00:15:50.872 00:15:50.872 Latency(us) 00:15:50.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.872 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:50.872 null0 : 5.00 116884.61 456.58 0.00 0.00 544.09 184.32 882.50 00:15:50.872 =================================================================================================================== 00:15:50.872 Total : 116884.61 456.58 0.00 0.00 544.09 184.32 882.50 00:15:51.809 10:04:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:51.809 10:04:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:51.809 10:04:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:51.809 10:04:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:51.809 10:04:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:51.809 10:04:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:51.809 { 00:15:51.809 "subsystems": [ 00:15:51.809 { 00:15:51.809 "subsystem": "bdev", 00:15:51.809 "config": [ 00:15:51.809 { 00:15:51.809 "params": { 00:15:51.809 "io_mechanism": "io_uring", 00:15:51.809 "filename": "/dev/nullb0", 00:15:51.809 "name": "null0" 00:15:51.809 }, 00:15:51.809 "method": "bdev_xnvme_create" 00:15:51.809 }, 00:15:51.809 { 00:15:51.809 "method": "bdev_wait_for_examine" 00:15:51.809 } 00:15:51.809 ] 00:15:51.809 } 00:15:51.809 ] 00:15:51.809 } 00:15:51.809 [2024-06-10 10:04:41.241528] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:51.809 [2024-06-10 10:04:41.241747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75792 ] 00:15:52.068 [2024-06-10 10:04:41.414258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.326 [2024-06-10 10:04:41.603456] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.584 Running I/O for 5 seconds... 00:15:57.848 00:15:57.848 Latency(us) 00:15:57.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.848 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:57.848 null0 : 5.00 141251.73 551.76 0.00 0.00 449.54 247.62 752.17 00:15:57.848 =================================================================================================================== 00:15:57.848 Total : 141251.73 551.76 0.00 0.00 449.54 247.62 752.17 00:15:58.782 10:04:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:58.782 10:04:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:15:58.783 00:15:58.783 real 0m13.887s 00:15:58.783 user 0m10.989s 00:15:58.783 sys 0m2.670s 00:15:58.783 10:04:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:58.783 ************************************ 00:15:58.783 END TEST xnvme_bdevperf 00:15:58.783 ************************************ 00:15:58.783 10:04:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:58.783 ************************************ 00:15:58.783 END TEST nvme_xnvme 00:15:58.783 ************************************ 00:15:58.783 00:15:58.783 real 0m58.989s 00:15:58.783 user 0m50.575s 00:15:58.783 sys 0m7.561s 00:15:58.783 10:04:48 nvme_xnvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:58.783 10:04:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.783 10:04:48 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:58.783 10:04:48 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:58.783 10:04:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:58.783 10:04:48 -- common/autotest_common.sh@10 -- # set +x 00:15:58.783 ************************************ 00:15:58.783 START TEST blockdev_xnvme 00:15:58.783 ************************************ 00:15:58.783 10:04:48 blockdev_xnvme -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:58.783 * Looking for test storage... 
00:15:58.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75932 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 75932 00:15:58.783 10:04:48 blockdev_xnvme -- common/autotest_common.sh@830 -- # '[' -z 75932 ']' 00:15:58.783 10:04:48 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:58.783 10:04:48 blockdev_xnvme -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.783 10:04:48 blockdev_xnvme -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:58.783 10:04:48 blockdev_xnvme -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.783 10:04:48 blockdev_xnvme -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:58.783 10:04:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.042 [2024-06-10 10:04:48.397714] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:59.042 [2024-06-10 10:04:48.397921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75932 ] 00:15:59.300 [2024-06-10 10:04:48.577138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.300 [2024-06-10 10:04:48.782917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.235 10:04:49 blockdev_xnvme -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:00.235 10:04:49 blockdev_xnvme -- common/autotest_common.sh@863 -- # return 0 00:16:00.235 10:04:49 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:16:00.235 10:04:49 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:16:00.235 10:04:49 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:00.235 10:04:49 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:00.235 10:04:49 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:00.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:00.752 Waiting for block devices as requested 00:16:00.752 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:00.752 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:00.752 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:01.011 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:06.301 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local nvme bdf 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # local device=nvme2n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ none != none 
]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n2 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # local device=nvme2n2 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n3 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # local device=nvme2n3 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3c3n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # local device=nvme3c3n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1672 -- # is_block_zoned nvme3n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1661 -- # local device=nvme3n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism") 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:16:06.301 nvme0n1 00:16:06.301 nvme1n1 00:16:06.301 nvme2n1 00:16:06.301 nvme2n2 00:16:06.301 nvme2n3 00:16:06.301 nvme3n1 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r 
'.[] | select(.claimed == false)' 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.301 10:04:55 blockdev_xnvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:16:06.301 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:16:06.302 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d5013278-b864-4b83-8024-42dd7a85feea"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d5013278-b864-4b83-8024-42dd7a85feea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1dbe63fc-591e-4bcd-9971-160bf6fdce39"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1dbe63fc-591e-4bcd-9971-160bf6fdce39",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1b4537db-3154-45c0-8f18-85bf1606cd7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b4537db-3154-45c0-8f18-85bf1606cd7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4846f965-f80b-422d-88bf-c27cf4e2fe76"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4846f965-f80b-422d-88bf-c27cf4e2fe76",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8a7b55ac-08c8-4c54-a1b7-f0729202d06d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8a7b55ac-08c8-4c54-a1b7-f0729202d06d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' 
' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a7e6aa45-0d7e-4f05-9729-1c44731484e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a7e6aa45-0d7e-4f05-9729-1c44731484e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:16:06.302 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:16:06.302 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:16:06.302 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:16:06.302 10:04:55 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 75932 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@949 -- # '[' -z 75932 ']' 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@953 -- # kill -0 75932 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@954 -- # uname 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 75932 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@967 -- # echo 'killing process with pid 75932' 00:16:06.302 killing process with pid 75932 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@968 -- # kill 75932 00:16:06.302 10:04:55 blockdev_xnvme -- common/autotest_common.sh@973 -- # wait 75932 00:16:08.893 10:04:57 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:08.893 10:04:57 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:08.893 10:04:57 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:16:08.893 10:04:57 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:08.893 10:04:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.893 ************************************ 00:16:08.893 START TEST bdev_hello_world 00:16:08.893 ************************************ 00:16:08.893 10:04:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:08.893 [2024-06-10 10:04:57.972693] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:16:08.893 [2024-06-10 10:04:57.972859] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76301 ] 00:16:08.893 [2024-06-10 10:04:58.148712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.893 [2024-06-10 10:04:58.378909] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.460 [2024-06-10 10:04:58.756027] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:09.460 [2024-06-10 10:04:58.756097] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:09.460 [2024-06-10 10:04:58.756135] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:09.460 [2024-06-10 10:04:58.758239] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:09.460 [2024-06-10 10:04:58.758660] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:09.460 [2024-06-10 10:04:58.758698] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:09.460 [2024-06-10 10:04:58.758938] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:16:09.460 00:16:09.460 [2024-06-10 10:04:58.758975] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:10.395 00:16:10.395 real 0m1.965s 00:16:10.395 user 0m1.629s 00:16:10.395 sys 0m0.220s 00:16:10.395 10:04:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:10.395 10:04:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:10.395 ************************************ 00:16:10.395 END TEST bdev_hello_world 00:16:10.395 ************************************ 00:16:10.395 10:04:59 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:16:10.395 10:04:59 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:10.395 10:04:59 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:10.395 10:04:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:10.395 ************************************ 00:16:10.395 START TEST bdev_bounds 00:16:10.395 ************************************ 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=76339 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:10.395 Process bdevio pid: 76339 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 76339' 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 76339 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 76339 ']' 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:10.395 10:04:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:10.654 [2024-06-10 10:04:59.979612] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:10.654 [2024-06-10 10:04:59.979803] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76339 ] 00:16:10.654 [2024-06-10 10:05:00.143693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.913 [2024-06-10 10:05:00.329301] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.913 [2024-06-10 10:05:00.329368] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.913 [2024-06-10 10:05:00.329370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.481 10:05:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:11.481 10:05:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:16:11.481 10:05:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:11.740 I/O targets: 00:16:11.740 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:11.740 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:11.740 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:11.740 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:11.740 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:11.740 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:11.740 00:16:11.740 00:16:11.740 CUnit - A unit testing framework for C - Version 2.1-3 00:16:11.740 http://cunit.sourceforge.net/ 00:16:11.740 00:16:11.740 00:16:11.740 Suite: bdevio tests on: nvme3n1 00:16:11.740 Test: blockdev write read block ...passed 00:16:11.740 Test: blockdev write zeroes read block ...passed 00:16:11.740 Test: blockdev write zeroes read no split ...passed 00:16:11.740 Test: blockdev write zeroes read split ...passed 00:16:11.740 Test: blockdev write zeroes read split partial ...passed 00:16:11.740 Test: blockdev reset ...passed 00:16:11.740 Test: blockdev write read 8 blocks ...passed 00:16:11.740 Test: blockdev write read size > 128k ...passed 00:16:11.740 Test: blockdev write read invalid size ...passed 00:16:11.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:11.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:11.740 Test: blockdev write read max offset ...passed 00:16:11.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.740 Test: blockdev writev readv 8 blocks ...passed 00:16:11.740 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.740 Test: blockdev writev readv block ...passed 00:16:11.740 Test: blockdev writev readv size > 128k ...passed 00:16:11.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.740 Test: blockdev comparev and writev ...passed 00:16:11.740 Test: blockdev nvme passthru rw ...passed 00:16:11.740 Test: blockdev nvme passthru vendor specific ...passed 00:16:11.740 Test: blockdev nvme admin passthru 
...passed 00:16:11.740 Test: blockdev copy ...passed 00:16:11.740 Suite: bdevio tests on: nvme2n3 00:16:11.740 Test: blockdev write read block ...passed 00:16:11.740 Test: blockdev write zeroes read block ...passed 00:16:11.740 Test: blockdev write zeroes read no split ...passed 00:16:11.740 Test: blockdev write zeroes read split ...passed 00:16:11.740 Test: blockdev write zeroes read split partial ...passed 00:16:11.740 Test: blockdev reset ...passed 00:16:11.740 Test: blockdev write read 8 blocks ...passed 00:16:11.740 Test: blockdev write read size > 128k ...passed 00:16:11.740 Test: blockdev write read invalid size ...passed 00:16:11.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:11.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:11.740 Test: blockdev write read max offset ...passed 00:16:11.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.740 Test: blockdev writev readv 8 blocks ...passed 00:16:11.740 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.740 Test: blockdev writev readv block ...passed 00:16:11.740 Test: blockdev writev readv size > 128k ...passed 00:16:11.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.740 Test: blockdev comparev and writev ...passed 00:16:11.740 Test: blockdev nvme passthru rw ...passed 00:16:11.740 Test: blockdev nvme passthru vendor specific ...passed 00:16:11.740 Test: blockdev nvme admin passthru ...passed 00:16:11.740 Test: blockdev copy ...passed 00:16:11.740 Suite: bdevio tests on: nvme2n2 00:16:11.740 Test: blockdev write read block ...passed 00:16:11.740 Test: blockdev write zeroes read block ...passed 00:16:11.740 Test: blockdev write zeroes read no split ...passed 00:16:11.740 Test: blockdev write zeroes read split ...passed 00:16:11.740 Test: blockdev write zeroes read split partial ...passed 00:16:11.740 Test: blockdev reset ...passed 00:16:11.740 Test: blockdev write read 8 blocks ...passed 00:16:11.740 Test: blockdev write read size > 128k ...passed 00:16:11.740 Test: blockdev write read invalid size ...passed 00:16:11.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:11.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:11.740 Test: blockdev write read max offset ...passed 00:16:11.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.740 Test: blockdev writev readv 8 blocks ...passed 00:16:11.740 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.740 Test: blockdev writev readv block ...passed 00:16:11.740 Test: blockdev writev readv size > 128k ...passed 00:16:11.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.740 Test: blockdev comparev and writev ...passed 00:16:11.740 Test: blockdev nvme passthru rw ...passed 00:16:11.740 Test: blockdev nvme passthru vendor specific ...passed 00:16:11.740 Test: blockdev nvme admin passthru ...passed 00:16:11.740 Test: blockdev copy ...passed 00:16:11.740 Suite: bdevio tests on: nvme2n1 00:16:11.740 Test: blockdev write read block ...passed 00:16:11.740 Test: blockdev write zeroes read block ...passed 00:16:11.740 Test: blockdev write zeroes read no split ...passed 00:16:11.740 Test: blockdev write zeroes read split ...passed 00:16:11.999 Test: blockdev write zeroes read split partial ...passed 00:16:11.999 Test: blockdev reset ...passed 00:16:11.999 Test: blockdev write read 8 blocks ...passed 00:16:11.999 Test: blockdev write read 
size > 128k ...passed 00:16:11.999 Test: blockdev write read invalid size ...passed 00:16:11.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:11.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:11.999 Test: blockdev write read max offset ...passed 00:16:11.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.999 Test: blockdev writev readv 8 blocks ...passed 00:16:11.999 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.999 Test: blockdev writev readv block ...passed 00:16:11.999 Test: blockdev writev readv size > 128k ...passed 00:16:11.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.999 Test: blockdev comparev and writev ...passed 00:16:11.999 Test: blockdev nvme passthru rw ...passed 00:16:11.999 Test: blockdev nvme passthru vendor specific ...passed 00:16:11.999 Test: blockdev nvme admin passthru ...passed 00:16:11.999 Test: blockdev copy ...passed 00:16:11.999 Suite: bdevio tests on: nvme1n1 00:16:11.999 Test: blockdev write read block ...passed 00:16:11.999 Test: blockdev write zeroes read block ...passed 00:16:11.999 Test: blockdev write zeroes read no split ...passed 00:16:11.999 Test: blockdev write zeroes read split ...passed 00:16:11.999 Test: blockdev write zeroes read split partial ...passed 00:16:11.999 Test: blockdev reset ...passed 00:16:11.999 Test: blockdev write read 8 blocks ...passed 00:16:11.999 Test: blockdev write read size > 128k ...passed 00:16:11.999 Test: blockdev write read invalid size ...passed 00:16:11.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:11.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:11.999 Test: blockdev write read max offset ...passed 00:16:11.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.999 Test: blockdev writev readv 8 blocks ...passed 00:16:11.999 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.999 Test: blockdev writev readv block ...passed 00:16:11.999 Test: blockdev writev readv size > 128k ...passed 00:16:11.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.999 Test: blockdev comparev and writev ...passed 00:16:11.999 Test: blockdev nvme passthru rw ...passed 00:16:11.999 Test: blockdev nvme passthru vendor specific ...passed 00:16:11.999 Test: blockdev nvme admin passthru ...passed 00:16:11.999 Test: blockdev copy ...passed 00:16:11.999 Suite: bdevio tests on: nvme0n1 00:16:11.999 Test: blockdev write read block ...passed 00:16:11.999 Test: blockdev write zeroes read block ...passed 00:16:11.999 Test: blockdev write zeroes read no split ...passed 00:16:11.999 Test: blockdev write zeroes read split ...passed 00:16:11.999 Test: blockdev write zeroes read split partial ...passed 00:16:11.999 Test: blockdev reset ...passed 00:16:11.999 Test: blockdev write read 8 blocks ...passed 00:16:11.999 Test: blockdev write read size > 128k ...passed 00:16:11.999 Test: blockdev write read invalid size ...passed 00:16:11.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:11.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:11.999 Test: blockdev write read max offset ...passed 00:16:11.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.999 Test: blockdev writev readv 8 blocks ...passed 00:16:11.999 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.999 Test: blockdev 
writev readv block ...passed 00:16:11.999 Test: blockdev writev readv size > 128k ...passed 00:16:11.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.999 Test: blockdev comparev and writev ...passed 00:16:11.999 Test: blockdev nvme passthru rw ...passed 00:16:11.999 Test: blockdev nvme passthru vendor specific ...passed 00:16:11.999 Test: blockdev nvme admin passthru ...passed 00:16:11.999 Test: blockdev copy ...passed 00:16:11.999 00:16:11.999 Run Summary: Type Total Ran Passed Failed Inactive 00:16:11.999 suites 6 6 n/a 0 0 00:16:11.999 tests 138 138 138 0 0 00:16:11.999 asserts 780 780 780 0 n/a 00:16:11.999 00:16:11.999 Elapsed time = 1.126 seconds 00:16:11.999 0 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 76339 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 76339 ']' 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 76339 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 76339 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:12.000 killing process with pid 76339 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 76339' 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # kill 76339 00:16:12.000 10:05:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # wait 76339 00:16:13.378 10:05:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:16:13.378 00:16:13.378 real 0m2.709s 00:16:13.378 user 0m6.440s 00:16:13.378 sys 0m0.344s 00:16:13.378 10:05:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:13.378 10:05:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:13.378 ************************************ 00:16:13.378 END TEST bdev_bounds 00:16:13.378 ************************************ 00:16:13.378 10:05:02 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:13.378 10:05:02 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:13.378 10:05:02 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:13.378 10:05:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:13.378 ************************************ 00:16:13.378 START TEST bdev_nbd 00:16:13.378 ************************************ 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:13.378 
10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=76403 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 76403 /var/tmp/spdk-nbd.sock 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 76403 ']' 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:13.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:13.378 10:05:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:13.378 [2024-06-10 10:05:02.748432] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
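[Orientation note, not part of the captured trace] The bdev_nbd test starting here launches a bdev_svc app with its RPC server on /var/tmp/spdk-nbd.sock, exports each xNVMe bdev as a kernel /dev/nbd* device with nbd_start_disk, sanity-checks it with a single direct-I/O dd read (the waitfornbd helper seen below), and removes the export with nbd_stop_disk. A rough manual equivalent is sketched below; the RPC names, socket path, and dd flags are taken from the trace, while the nbd kernel module being loaded and the scratch output path are assumptions:

  # Sketch only: export a bdev over NBD, read one 4 KiB block to confirm it responds,
  # then stop the export (mirrors nbd_start_disk / waitfornbd / nbd_stop_disk in the trace).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0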
00:16:13.378 [2024-06-10 10:05:02.748573] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.639 [2024-06-10 10:05:02.923480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.898 [2024-06-10 10:05:03.218443] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:14.466 10:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.725 1+0 records in 
00:16:14.725 1+0 records out 00:16:14.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388028 s, 10.6 MB/s 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:14.725 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.984 1+0 records in 00:16:14.984 1+0 records out 00:16:14.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512111 s, 8.0 MB/s 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:14.984 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.243 1+0 records in 00:16:15.243 1+0 records out 00:16:15.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504699 s, 8.1 MB/s 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:15.243 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.502 1+0 records in 00:16:15.502 1+0 records out 00:16:15.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595909 s, 6.9 MB/s 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:15.502 10:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd4 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd4 /proc/partitions 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.761 1+0 records in 00:16:15.761 1+0 records out 00:16:15.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531684 s, 7.7 MB/s 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:15.761 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd5 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local i 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd5 /proc/partitions 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:16.327 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.328 1+0 records in 00:16:16.328 1+0 records out 00:16:16.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674567 s, 6.1 MB/s 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:16.328 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd0", 00:16:16.586 "bdev_name": "nvme0n1" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd1", 00:16:16.586 "bdev_name": "nvme1n1" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd2", 00:16:16.586 "bdev_name": "nvme2n1" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd3", 00:16:16.586 "bdev_name": "nvme2n2" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd4", 00:16:16.586 "bdev_name": "nvme2n3" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd5", 00:16:16.586 "bdev_name": "nvme3n1" 00:16:16.586 } 00:16:16.586 ]' 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd0", 00:16:16.586 "bdev_name": "nvme0n1" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd1", 00:16:16.586 "bdev_name": "nvme1n1" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd2", 00:16:16.586 "bdev_name": "nvme2n1" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd3", 00:16:16.586 "bdev_name": "nvme2n2" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd4", 00:16:16.586 "bdev_name": "nvme2n3" 00:16:16.586 }, 00:16:16.586 { 00:16:16.586 "nbd_device": "/dev/nbd5", 00:16:16.586 "bdev_name": "nvme3n1" 00:16:16.586 } 00:16:16.586 ]' 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:16.586 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.587 10:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.845 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.103 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.701 10:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:17.701 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:17.701 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:17.701 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:17.701 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.966 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.966 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:17.966 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:17.966 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.966 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.966 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.223 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:18.480 10:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:18.739 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:18.998 /dev/nbd0 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.999 1+0 records in 00:16:18.999 1+0 records out 00:16:18.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760583 s, 5.4 MB/s 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:18.999 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:16:19.258 /dev/nbd1 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.258 1+0 records in 00:16:19.258 1+0 records out 00:16:19.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789398 s, 5.2 MB/s 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:19.258 10:05:08 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:19.258 10:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:16:19.517 /dev/nbd10 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.777 1+0 records in 00:16:19.777 1+0 records out 00:16:19.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588689 s, 7.0 MB/s 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:19.777 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:16:20.035 /dev/nbd11 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:20.035 10:05:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.035 1+0 records in 00:16:20.035 1+0 records out 00:16:20.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694226 s, 5.9 MB/s 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:20.035 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:16:20.294 /dev/nbd12 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd12 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd12 /proc/partitions 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.295 1+0 records in 00:16:20.295 1+0 records out 00:16:20.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00094512 s, 4.3 MB/s 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:20.295 10:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:20.554 /dev/nbd13 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd13 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd13 /proc/partitions 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.554 1+0 records in 00:16:20.554 1+0 records out 00:16:20.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534277 s, 7.7 MB/s 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.554 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:20.812 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd0", 00:16:20.812 "bdev_name": "nvme0n1" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd1", 00:16:20.812 "bdev_name": "nvme1n1" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd10", 00:16:20.812 "bdev_name": "nvme2n1" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd11", 00:16:20.812 "bdev_name": "nvme2n2" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd12", 00:16:20.812 "bdev_name": "nvme2n3" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd13", 00:16:20.812 "bdev_name": "nvme3n1" 00:16:20.812 } 00:16:20.812 ]' 00:16:20.812 10:05:10 
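The trace above is the attach half of nbd_rpc_data_verify: each xNVMe bdev is exported as an NBD device over the /var/tmp/spdk-nbd.sock RPC socket, and waitfornbd polls until the kernel block device is actually usable. The remainder of the helper then pushes 1 MiB of random data through every device and compares it back. A condensed bash sketch of the whole round trip, reconstructed from the traced helpers rather than copied verbatim — the retry delay and the /tmp scratch paths are assumptions, and the real waitfornbd also retries the dd probe up to 20 times:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# Attach: export each bdev as an NBD device, then wait for it to go live.
for i in "${!bdevs[@]}"; do
    $rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
    name=$(basename "${nbds[$i]}")
    for ((try = 1; try <= 20; try++)); do       # poll the kernel's view of the device
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1                               # delay assumed; not visible in the log
    done
    # Prove readability with a single direct-I/O block, as waitfornbd does.
    dd if="${nbds[$i]}" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ "$(stat -c %s /tmp/nbdtest)" != 0 ]]     # a non-empty read means the device is live
    rm -f /tmp/nbdtest
done

# Data check: a write pass, then a separate verify pass, as nbd_dd_data_verify does.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for nbd in "${nbds[@]}"; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbds[@]}"; do
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"        # fails loudly on any byte mismatch
done
rm /tmp/nbdrandtest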
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:20.812 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd0", 00:16:20.812 "bdev_name": "nvme0n1" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd1", 00:16:20.812 "bdev_name": "nvme1n1" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd10", 00:16:20.812 "bdev_name": "nvme2n1" 00:16:20.812 }, 00:16:20.812 { 00:16:20.812 "nbd_device": "/dev/nbd11", 00:16:20.812 "bdev_name": "nvme2n2" 00:16:20.812 }, 00:16:20.813 { 00:16:20.813 "nbd_device": "/dev/nbd12", 00:16:20.813 "bdev_name": "nvme2n3" 00:16:20.813 }, 00:16:20.813 { 00:16:20.813 "nbd_device": "/dev/nbd13", 00:16:20.813 "bdev_name": "nvme3n1" 00:16:20.813 } 00:16:20.813 ]' 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:21.070 /dev/nbd1 00:16:21.070 /dev/nbd10 00:16:21.070 /dev/nbd11 00:16:21.070 /dev/nbd12 00:16:21.070 /dev/nbd13' 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:21.070 /dev/nbd1 00:16:21.070 /dev/nbd10 00:16:21.070 /dev/nbd11 00:16:21.070 /dev/nbd12 00:16:21.070 /dev/nbd13' 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:21.070 256+0 records in 00:16:21.070 256+0 records out 00:16:21.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066133 s, 159 MB/s 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:21.070 256+0 records in 00:16:21.070 256+0 records out 00:16:21.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146954 s, 7.1 MB/s 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.070 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:21.328 256+0 records in 00:16:21.328 256+0 records out 00:16:21.328 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.176018 s, 6.0 MB/s 00:16:21.328 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.328 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:21.586 256+0 records in 00:16:21.586 256+0 records out 00:16:21.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15641 s, 6.7 MB/s 00:16:21.587 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.587 10:05:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:21.587 256+0 records in 00:16:21.587 256+0 records out 00:16:21.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132274 s, 7.9 MB/s 00:16:21.587 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.587 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:21.845 256+0 records in 00:16:21.845 256+0 records out 00:16:21.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139714 s, 7.5 MB/s 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:21.845 256+0 records in 00:16:21.845 256+0 records out 00:16:21.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149448 s, 7.0 MB/s 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.845 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:22.103 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:22.103 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:22.103 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:22.104 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:22.104 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.104 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:22.104 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.104 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.363 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:22.621 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.622 10:05:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.880 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.137 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.395 10:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.961 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:23.962 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:23.962 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:23.962 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:16:24.219 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:24.477 malloc_lvol_verify 00:16:24.477 10:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:24.735 b91acd8b-689a-4833-9975-3c85ec90448b 00:16:24.735 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:24.992 e95cb9ce-037e-436d-8566-2e6ac64504cc 00:16:24.992 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:25.251 /dev/nbd0 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:16:25.251 mke2fs 1.46.5 (30-Dec-2021) 00:16:25.251 Discarding device blocks: 0/4096 done 00:16:25.251 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:16:25.251 00:16:25.251 Allocating group tables: 0/1 done 00:16:25.251 Writing inode tables: 0/1 done 00:16:25.251 Creating journal (1024 blocks): done 00:16:25.251 Writing superblocks and filesystem accounting information: 0/1 done 00:16:25.251 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.251 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 76403 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 76403 ']' 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 76403 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 76403 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:25.526 killing process with pid 76403 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 76403' 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # kill 76403 00:16:25.526 10:05:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # wait 76403 00:16:26.904 10:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:16:26.904 00:16:26.904 real 0m13.532s 00:16:26.904 user 0m19.228s 00:16:26.904 sys 0m4.417s 00:16:26.904 10:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:26.904 10:05:16 
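The nbd_with_lvol_verify step just traced layers a 4 MiB logical volume on a 16 MiB malloc bdev with 512-byte blocks, exports it as /dev/nbd0, and treats a successful mkfs.ext4 as proof that the NBD path handles real filesystem I/O end to end. The same sequence as standalone RPC calls, using exactly the arguments shown in the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore "lvs" on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume -> "lvs/lvol"
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol over NBD
mkfs.ext4 /dev/nbd0                                    # 4096 x 1k blocks, per the mkfs output
$rpc nbd_stop_disk /dev/nbd0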
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:26.904 ************************************ 00:16:26.904 END TEST bdev_nbd 00:16:26.904 ************************************ 00:16:26.904 10:05:16 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:16:26.904 10:05:16 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:16:26.904 10:05:16 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:16:26.904 10:05:16 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:16:26.904 10:05:16 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:26.904 10:05:16 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:26.904 10:05:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.904 ************************************ 00:16:26.904 START TEST bdev_fio 00:16:26.904 ************************************ 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:26.904 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=verify 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1298 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # 
[[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:26.904 ************************************ 00:16:26.904 START TEST bdev_fio_rw_verify 00:16:26.904 ************************************ 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local sanitizers 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:26.904 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # break 00:16:26.905 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:26.905 10:05:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:27.162 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:27.162 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:27.162 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:27.162 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:27.162 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:27.162 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:27.162 fio-3.35 00:16:27.162 Starting 6 threads 00:16:39.355 00:16:39.355 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=76836: Mon Jun 10 10:05:27 2024 00:16:39.355 read: IOPS=25.9k, BW=101MiB/s (106MB/s)(1014MiB/10001msec) 00:16:39.355 slat (usec): 
min=3, max=3553, avg= 7.01, stdev= 9.18 00:16:39.355 clat (usec): min=126, max=18000, avg=724.75, stdev=396.76 00:16:39.355 lat (usec): min=130, max=18011, avg=731.77, stdev=397.21 00:16:39.355 clat percentiles (usec): 00:16:39.355 | 50.000th=[ 725], 99.000th=[ 1614], 99.900th=[ 4621], 99.990th=[15008], 00:16:39.355 | 99.999th=[17957] 00:16:39.355 write: IOPS=26.2k, BW=102MiB/s (107MB/s)(1023MiB/10001msec); 0 zone resets 00:16:39.355 slat (usec): min=14, max=4527, avg=28.86, stdev=36.59 00:16:39.355 clat (usec): min=119, max=18136, avg=804.34, stdev=398.88 00:16:39.355 lat (usec): min=148, max=18204, avg=833.19, stdev=401.78 00:16:39.355 clat percentiles (usec): 00:16:39.355 | 50.000th=[ 799], 99.000th=[ 1729], 99.900th=[ 4621], 99.990th=[15401], 00:16:39.355 | 99.999th=[17957] 00:16:39.355 bw ( KiB/s): min=87400, max=125024, per=100.00%, avg=105204.68, stdev=1687.69, samples=114 00:16:39.355 iops : min=21850, max=31256, avg=26301.00, stdev=421.93, samples=114 00:16:39.355 lat (usec) : 250=1.97%, 500=14.74%, 750=31.97%, 1000=38.48% 00:16:39.355 lat (msec) : 2=12.25%, 4=0.42%, 10=0.14%, 20=0.03% 00:16:39.355 cpu : usr=59.57%, sys=26.90%, ctx=6449, majf=0, minf=22544 00:16:39.355 IO depths : 1=12.2%, 2=24.8%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.355 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.355 issued rwts: total=259524,261978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.355 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:39.355 00:16:39.355 Run status group 0 (all jobs): 00:16:39.355 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=1014MiB (1063MB), run=10001-10001msec 00:16:39.355 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=1023MiB (1073MB), run=10001-10001msec 00:16:39.356 ----------------------------------------------------- 00:16:39.356 Suppressions used: 00:16:39.356 count bytes template 00:16:39.356 6 48 /usr/src/fio/parse.c 00:16:39.356 2277 218592 /usr/src/fio/iolog.c 00:16:39.356 1 8 libtcmalloc_minimal.so 00:16:39.356 1 904 libcrypto.so 00:16:39.356 ----------------------------------------------------- 00:16:39.356 00:16:39.356 00:16:39.356 real 0m12.281s 00:16:39.356 user 0m37.611s 00:16:39.356 sys 0m16.437s 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:39.356 ************************************ 00:16:39.356 END TEST bdev_fio_rw_verify 00:16:39.356 ************************************ 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 
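For the rw_verify pass above, fio_config_gen assembles bdev.fio with one [job_<bdev>] section per device, and fio is launched through SPDK's spdk_bdev ioengine with ASan preloaded ahead of the plugin, exactly as the LD_PRELOAD line in the trace shows. A sketch of the equivalent manual run; the verify options inside the generated [global] section are not visible in the log, so the rw= and verify= lines here are assumptions inferred from the job banner:

cat > bdev.fio <<'EOF'
[global]
serialize_overlap=1      # emitted after the fio-3.x version check above
rw=randwrite             # matches the "rw=randwrite" job banner printed by fio
verify=crc32c            # assumed: fio_config_gen chooses the verify method

[job_nvme0n1]
filename=nvme0n1
EOF
# ...plus one [job_<bdev>] section each for nvme1n1, nvme2n1, nvme2n2, nvme2n3, nvme3n1.

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output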
00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1298 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d5013278-b864-4b83-8024-42dd7a85feea"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d5013278-b864-4b83-8024-42dd7a85feea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1dbe63fc-591e-4bcd-9971-160bf6fdce39"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1dbe63fc-591e-4bcd-9971-160bf6fdce39",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1b4537db-3154-45c0-8f18-85bf1606cd7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b4537db-3154-45c0-8f18-85bf1606cd7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4846f965-f80b-422d-88bf-c27cf4e2fe76"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4846f965-f80b-422d-88bf-c27cf4e2fe76",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8a7b55ac-08c8-4c54-a1b7-f0729202d06d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8a7b55ac-08c8-4c54-a1b7-f0729202d06d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a7e6aa45-0d7e-4f05-9729-1c44731484e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a7e6aa45-0d7e-4f05-9729-1c44731484e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:39.356 /home/vagrant/spdk_repo/spdk 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:16:39.356 00:16:39.356 real 0m12.434s 00:16:39.356 user 0m37.701s 00:16:39.356 sys 0m16.500s 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:39.356 10:05:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:39.356 ************************************ 00:16:39.356 END TEST bdev_fio 00:16:39.356 ************************************ 00:16:39.356 10:05:28 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:39.356 10:05:28 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:39.356 10:05:28 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:16:39.356 10:05:28 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:39.356 10:05:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.356 ************************************ 00:16:39.356 START TEST bdev_verify 00:16:39.356 ************************************ 00:16:39.356 10:05:28 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:39.356 [2024-06-10 10:05:28.819788] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:39.356 [2024-06-10 10:05:28.819992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77009 ] 00:16:39.614 [2024-06-10 10:05:28.990701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:39.872 [2024-06-10 10:05:29.204979] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.872 [2024-06-10 10:05:29.204981] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.130 Running I/O for 5 seconds... 00:16:45.391 00:16:45.391 Latency(us) 00:16:45.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.391 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x0 length 0xa0000 00:16:45.391 nvme0n1 : 5.04 1626.46 6.35 0.00 0.00 78549.23 9711.24 68634.07 00:16:45.391 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0xa0000 length 0xa0000 00:16:45.391 nvme0n1 : 5.08 1661.81 6.49 0.00 0.00 76878.93 8400.52 68634.07 00:16:45.391 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x0 length 0xbd0bd 00:16:45.391 nvme1n1 : 5.08 2735.18 10.68 0.00 0.00 46402.59 5689.72 58624.93 00:16:45.391 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:45.391 nvme1n1 : 5.09 2823.93 11.03 0.00 0.00 45112.29 5481.19 63867.81 00:16:45.391 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x0 length 0x80000 00:16:45.391 nvme2n1 : 5.07 1640.20 6.41 0.00 0.00 77461.55 7417.48 75783.45 00:16:45.391 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x80000 length 0x80000 00:16:45.391 nvme2n1 : 5.10 1683.10 6.57 0.00 0.00 75542.22 10247.45 73876.95 00:16:45.391 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x0 length 0x80000 00:16:45.391 nvme2n2 : 5.08 1636.86 6.39 0.00 0.00 77491.93 7477.06 66250.94 00:16:45.391 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x80000 length 0x80000 00:16:45.391 nvme2n2 : 5.09 1659.25 6.48 0.00 0.00 76489.31 16324.42 58624.93 00:16:45.391 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x0 length 0x80000 00:16:45.391 nvme2n3 : 5.08 1637.84 6.40 0.00 0.00 77294.58 8877.15 68634.07 00:16:45.391 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x80000 length 0x80000 00:16:45.391 nvme2n3 : 5.09 1658.62 6.48 0.00 0.00 76381.34 16086.11 68634.07 00:16:45.391 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x0 length 0x20000 00:16:45.391 
nvme3n1 : 5.09 1635.85 6.39 0.00 0.00 77247.33 8698.41 73400.32 00:16:45.391 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.391 Verification LBA range: start 0x20000 length 0x20000 00:16:45.391 nvme3n1 : 5.08 1662.50 6.49 0.00 0.00 76056.33 8519.68 73876.95 00:16:45.391 =================================================================================================================== 00:16:45.391 Total : 22061.59 86.18 0.00 0.00 69068.62 5481.19 75783.45 00:16:46.768 00:16:46.768 real 0m7.303s 00:16:46.768 user 0m11.333s 00:16:46.768 sys 0m1.847s 00:16:46.768 10:05:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:46.768 10:05:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:46.768 ************************************ 00:16:46.768 END TEST bdev_verify 00:16:46.768 ************************************ 00:16:46.768 10:05:36 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:46.768 10:05:36 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:16:46.768 10:05:36 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:46.768 10:05:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.768 ************************************ 00:16:46.768 START TEST bdev_verify_big_io 00:16:46.768 ************************************ 00:16:46.768 10:05:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:46.768 [2024-06-10 10:05:36.150446] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:46.768 [2024-06-10 10:05:36.150599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77108 ] 00:16:47.027 [2024-06-10 10:05:36.324771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:47.285 [2024-06-10 10:05:36.554797] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.285 [2024-06-10 10:05:36.554805] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.851 Running I/O for 5 seconds... 
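Both bdevperf passes share one invocation shape; only the I/O size differs between the 4 KiB verify above and the 64 KiB big-I/O run now in flight (queue depth 128, 5-second runtime, cores 0-1 via -m 0x3). Side by side, with the flags taken directly from the trace:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

"$bdevperf" --json "$conf" -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io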
00:16:54.407 00:16:54.407 Latency(us) 00:16:54.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.407 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x0 length 0xa000 00:16:54.407 nvme0n1 : 5.82 79.76 4.99 0.00 0.00 1568239.50 109623.85 3309687.16 00:16:54.407 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0xa000 length 0xa000 00:16:54.407 nvme0n1 : 6.11 131.00 8.19 0.00 0.00 800226.73 38844.97 1662469.59 00:16:54.407 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x0 length 0xbd0b 00:16:54.407 nvme1n1 : 5.84 106.88 6.68 0.00 0.00 1138430.85 7536.64 2348810.24 00:16:54.407 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:54.407 nvme1n1 : 6.21 195.74 12.23 0.00 0.00 523941.98 3723.64 1227787.17 00:16:54.407 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x0 length 0x8000 00:16:54.407 nvme2n1 : 5.82 145.61 9.10 0.00 0.00 813662.60 104857.60 930372.89 00:16:54.407 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x8000 length 0x8000 00:16:54.407 nvme2n1 : 5.88 106.19 6.64 0.00 0.00 1157062.52 149660.39 1906501.82 00:16:54.407 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x0 length 0x8000 00:16:54.407 nvme2n2 : 5.83 135.91 8.49 0.00 0.00 848272.65 102951.10 1197283.14 00:16:54.407 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x8000 length 0x8000 00:16:54.407 nvme2n2 : 5.88 130.60 8.16 0.00 0.00 894055.02 151566.89 999006.95 00:16:54.407 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x0 length 0x8000 00:16:54.407 nvme2n3 : 5.84 178.01 11.13 0.00 0.00 635822.37 8043.05 1204909.15 00:16:54.407 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x8000 length 0x8000 00:16:54.407 nvme2n3 : 6.04 120.61 7.54 0.00 0.00 930014.93 56241.80 926559.88 00:16:54.407 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x0 length 0x2000 00:16:54.407 nvme3n1 : 5.85 147.82 9.24 0.00 0.00 744252.92 8638.84 1136275.08 00:16:54.407 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.407 Verification LBA range: start 0x2000 length 0x2000 00:16:54.407 nvme3n1 : 6.04 152.35 9.52 0.00 0.00 708291.85 82932.83 564324.54 00:16:54.407 =================================================================================================================== 00:16:54.407 Total : 1630.48 101.91 0.00 0.00 838023.02 3723.64 3309687.16 00:16:55.341 00:16:55.342 real 0m8.728s 00:16:55.342 user 0m15.571s 00:16:55.342 sys 0m0.569s 00:16:55.342 10:05:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:55.342 10:05:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.342 ************************************ 00:16:55.342 END TEST bdev_verify_big_io 00:16:55.342 ************************************ 00:16:55.342 
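Both verify runs above drive the same bdevperf example binary against the xnvme bdevs declared in bdev.json; only the I/O size changes between them (4096 bytes for bdev_verify, 65536 for bdev_verify_big_io). A minimal standalone sketch of those invocations, with paths written relative to an assumed SPDK repo root rather than the /home/vagrant checkout used here:

  # queue depth 128, 4 KiB I/O, verify workload, 5 seconds, core mask 0x3 (same flags as the bdev_verify run)
  ./build/examples/bdevperf --json ./test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3

  # bdev_verify_big_io only raises the I/O size to 64 KiB
  ./build/examples/bdevperf --json ./test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3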
10:05:44 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:55.342 10:05:44 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:16:55.342 10:05:44 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:55.342 10:05:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:55.342 ************************************ 00:16:55.342 START TEST bdev_write_zeroes 00:16:55.342 ************************************ 00:16:55.342 10:05:44 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:55.598 [2024-06-10 10:05:44.920862] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:55.598 [2024-06-10 10:05:44.921038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77226 ] 00:16:55.598 [2024-06-10 10:05:45.093021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.871 [2024-06-10 10:05:45.379168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.437 Running I/O for 1 seconds... 00:16:57.374 00:16:57.374 Latency(us) 00:16:57.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.374 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:57.374 nvme0n1 : 1.01 10877.12 42.49 0.00 0.00 11753.26 7268.54 19541.64 00:16:57.374 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:57.375 nvme1n1 : 1.03 14982.83 58.53 0.00 0.00 8472.15 3991.74 16205.27 00:16:57.375 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:57.375 nvme2n1 : 1.02 10813.97 42.24 0.00 0.00 11754.75 7208.96 19779.96 00:16:57.375 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:57.375 nvme2n2 : 1.02 10791.85 42.16 0.00 0.00 11771.23 7179.17 19303.33 00:16:57.375 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:57.375 nvme2n3 : 1.02 10770.46 42.07 0.00 0.00 11782.74 7268.54 18588.39 00:16:57.375 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:57.375 nvme3n1 : 1.02 10749.04 41.99 0.00 0.00 11798.57 7328.12 18588.39 00:16:57.375 =================================================================================================================== 00:16:57.375 Total : 68985.27 269.47 0.00 0.00 11051.79 3991.74 19779.96 00:16:58.750 00:16:58.750 real 0m3.341s 00:16:58.750 user 0m2.570s 00:16:58.750 sys 0m0.587s 00:16:58.750 10:05:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:58.750 10:05:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:58.750 ************************************ 00:16:58.750 END TEST bdev_write_zeroes 00:16:58.750 ************************************ 00:16:58.750 10:05:48 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 
4096 -w write_zeroes -t 1 '' 00:16:58.750 10:05:48 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:16:58.750 10:05:48 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:58.750 10:05:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.750 ************************************ 00:16:58.750 START TEST bdev_json_nonenclosed 00:16:58.750 ************************************ 00:16:58.750 10:05:48 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:59.008 [2024-06-10 10:05:48.293047] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:59.008 [2024-06-10 10:05:48.293190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77290 ] 00:16:59.008 [2024-06-10 10:05:48.459202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.266 [2024-06-10 10:05:48.684619] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.266 [2024-06-10 10:05:48.684747] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:59.266 [2024-06-10 10:05:48.684781] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:59.266 [2024-06-10 10:05:48.684797] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:59.834 00:16:59.834 real 0m0.910s 00:16:59.834 user 0m0.687s 00:16:59.834 sys 0m0.116s 00:16:59.834 10:05:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:59.834 10:05:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:59.834 ************************************ 00:16:59.834 END TEST bdev_json_nonenclosed 00:16:59.834 ************************************ 00:16:59.834 10:05:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:59.834 10:05:49 blockdev_xnvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:16:59.834 10:05:49 blockdev_xnvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:59.834 10:05:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:59.834 ************************************ 00:16:59.834 START TEST bdev_json_nonarray 00:16:59.834 ************************************ 00:16:59.834 10:05:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:59.834 [2024-06-10 10:05:49.261731] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
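bdev_json_nonenclosed (finished above) and bdev_json_nonarray (whose run starts here) both hand bdevperf a deliberately malformed configuration file and expect the app to stop with a non-zero status instead of running I/O. A condensed sketch of the shapes being checked; the file contents below are illustrative assumptions, only the error strings come from the trace:

  # A well-formed configuration is a JSON object whose "subsystems" key is an array:
  echo '{ "subsystems": [] }' > good.json
  # Something shaped like this (no enclosing object) would trip
  #   "Invalid JSON configuration: not enclosed in {}."
  echo '[ { "subsystem": "bdev", "config": [] } ]' > bad_nonenclosed.json
  # and a non-array "subsystems" value would trip
  #   "Invalid JSON configuration: 'subsystems' should be an array."
  echo '{ "subsystems": {} }' > bad_nonarray.json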
00:16:59.834 [2024-06-10 10:05:49.261893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77317 ] 00:17:00.093 [2024-06-10 10:05:49.431400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.359 [2024-06-10 10:05:49.623178] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.359 [2024-06-10 10:05:49.623294] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:17:00.359 [2024-06-10 10:05:49.623322] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:00.359 [2024-06-10 10:05:49.623337] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:00.617 00:17:00.617 real 0m0.943s 00:17:00.617 user 0m0.711s 00:17:00.617 sys 0m0.121s 00:17:00.618 10:05:50 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:00.618 10:05:50 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:00.618 ************************************ 00:17:00.618 END TEST bdev_json_nonarray 00:17:00.618 ************************************ 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:00.875 10:05:50 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:01.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.007 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:02.007 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:02.007 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:02.265 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:02.265 00:17:02.265 real 1m3.442s 00:17:02.265 user 1m46.685s 00:17:02.265 sys 0m27.272s 00:17:02.265 10:05:51 blockdev_xnvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:02.265 ************************************ 00:17:02.265 END TEST blockdev_xnvme 00:17:02.265 ************************************ 00:17:02.265 10:05:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:02.265 10:05:51 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:02.265 10:05:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:02.265 10:05:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:02.265 10:05:51 -- common/autotest_common.sh@10 -- # 
set +x 00:17:02.265 ************************************ 00:17:02.265 START TEST ublk 00:17:02.265 ************************************ 00:17:02.265 10:05:51 ublk -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:02.265 * Looking for test storage... 00:17:02.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:02.265 10:05:51 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:02.265 10:05:51 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:02.265 10:05:51 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:02.265 10:05:51 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:02.265 10:05:51 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:02.265 10:05:51 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:02.265 10:05:51 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:02.265 10:05:51 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:02.265 10:05:51 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:02.265 10:05:51 ublk -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:02.265 10:05:51 ublk -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:02.265 10:05:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.265 ************************************ 00:17:02.265 START TEST test_save_ublk_config 00:17:02.265 ************************************ 00:17:02.265 10:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # test_save_config 00:17:02.265 10:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:02.265 10:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=77596 00:17:02.265 10:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:02.266 10:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:02.266 10:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 77596 00:17:02.266 10:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@830 -- # '[' -z 77596 ']' 00:17:02.266 10:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.266 10:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:02.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.266 10:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:02.266 10:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:02.266 10:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:02.524 [2024-06-10 10:05:51.901691] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:17:02.524 [2024-06-10 10:05:51.901860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77596 ] 00:17:02.819 [2024-06-10 10:05:52.082757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.077 [2024-06-10 10:05:52.348354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@863 -- # return 0 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:04.013 [2024-06-10 10:05:53.216721] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:04.013 [2024-06-10 10:05:53.217872] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:04.013 malloc0 00:17:04.013 [2024-06-10 10:05:53.301005] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:04.013 [2024-06-10 10:05:53.301187] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:04.013 [2024-06-10 10:05:53.301218] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:04.013 [2024-06-10 10:05:53.301229] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:04.013 [2024-06-10 10:05:53.310002] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:04.013 [2024-06-10 10:05:53.310041] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:04.013 [2024-06-10 10:05:53.316899] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:04.013 [2024-06-10 10:05:53.317043] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:04.013 [2024-06-10 10:05:53.333797] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:04.013 0 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.013 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:04.272 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.272 10:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:04.272 "subsystems": [ 00:17:04.272 { 00:17:04.272 "subsystem": "keyring", 00:17:04.272 "config": [] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "iobuf", 00:17:04.272 "config": [ 00:17:04.272 { 
00:17:04.272 "method": "iobuf_set_options", 00:17:04.272 "params": { 00:17:04.272 "small_pool_count": 8192, 00:17:04.272 "large_pool_count": 1024, 00:17:04.272 "small_bufsize": 8192, 00:17:04.272 "large_bufsize": 135168 00:17:04.272 } 00:17:04.272 } 00:17:04.272 ] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "sock", 00:17:04.272 "config": [ 00:17:04.272 { 00:17:04.272 "method": "sock_set_default_impl", 00:17:04.272 "params": { 00:17:04.272 "impl_name": "posix" 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "sock_impl_set_options", 00:17:04.272 "params": { 00:17:04.272 "impl_name": "ssl", 00:17:04.272 "recv_buf_size": 4096, 00:17:04.272 "send_buf_size": 4096, 00:17:04.272 "enable_recv_pipe": true, 00:17:04.272 "enable_quickack": false, 00:17:04.272 "enable_placement_id": 0, 00:17:04.272 "enable_zerocopy_send_server": true, 00:17:04.272 "enable_zerocopy_send_client": false, 00:17:04.272 "zerocopy_threshold": 0, 00:17:04.272 "tls_version": 0, 00:17:04.272 "enable_ktls": false 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "sock_impl_set_options", 00:17:04.272 "params": { 00:17:04.272 "impl_name": "posix", 00:17:04.272 "recv_buf_size": 2097152, 00:17:04.272 "send_buf_size": 2097152, 00:17:04.272 "enable_recv_pipe": true, 00:17:04.272 "enable_quickack": false, 00:17:04.272 "enable_placement_id": 0, 00:17:04.272 "enable_zerocopy_send_server": true, 00:17:04.272 "enable_zerocopy_send_client": false, 00:17:04.272 "zerocopy_threshold": 0, 00:17:04.272 "tls_version": 0, 00:17:04.272 "enable_ktls": false 00:17:04.272 } 00:17:04.272 } 00:17:04.272 ] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "vmd", 00:17:04.272 "config": [] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "accel", 00:17:04.272 "config": [ 00:17:04.272 { 00:17:04.272 "method": "accel_set_options", 00:17:04.272 "params": { 00:17:04.272 "small_cache_size": 128, 00:17:04.272 "large_cache_size": 16, 00:17:04.272 "task_count": 2048, 00:17:04.272 "sequence_count": 2048, 00:17:04.272 "buf_count": 2048 00:17:04.272 } 00:17:04.272 } 00:17:04.272 ] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "bdev", 00:17:04.272 "config": [ 00:17:04.272 { 00:17:04.272 "method": "bdev_set_options", 00:17:04.272 "params": { 00:17:04.272 "bdev_io_pool_size": 65535, 00:17:04.272 "bdev_io_cache_size": 256, 00:17:04.272 "bdev_auto_examine": true, 00:17:04.272 "iobuf_small_cache_size": 128, 00:17:04.272 "iobuf_large_cache_size": 16 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "bdev_raid_set_options", 00:17:04.272 "params": { 00:17:04.272 "process_window_size_kb": 1024 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "bdev_iscsi_set_options", 00:17:04.272 "params": { 00:17:04.272 "timeout_sec": 30 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "bdev_nvme_set_options", 00:17:04.272 "params": { 00:17:04.272 "action_on_timeout": "none", 00:17:04.272 "timeout_us": 0, 00:17:04.272 "timeout_admin_us": 0, 00:17:04.272 "keep_alive_timeout_ms": 10000, 00:17:04.272 "arbitration_burst": 0, 00:17:04.272 "low_priority_weight": 0, 00:17:04.272 "medium_priority_weight": 0, 00:17:04.272 "high_priority_weight": 0, 00:17:04.272 "nvme_adminq_poll_period_us": 10000, 00:17:04.272 "nvme_ioq_poll_period_us": 0, 00:17:04.272 "io_queue_requests": 0, 00:17:04.272 "delay_cmd_submit": true, 00:17:04.272 "transport_retry_count": 4, 00:17:04.272 "bdev_retry_count": 3, 00:17:04.272 "transport_ack_timeout": 0, 00:17:04.272 
"ctrlr_loss_timeout_sec": 0, 00:17:04.272 "reconnect_delay_sec": 0, 00:17:04.272 "fast_io_fail_timeout_sec": 0, 00:17:04.272 "disable_auto_failback": false, 00:17:04.272 "generate_uuids": false, 00:17:04.272 "transport_tos": 0, 00:17:04.272 "nvme_error_stat": false, 00:17:04.272 "rdma_srq_size": 0, 00:17:04.272 "io_path_stat": false, 00:17:04.272 "allow_accel_sequence": false, 00:17:04.272 "rdma_max_cq_size": 0, 00:17:04.272 "rdma_cm_event_timeout_ms": 0, 00:17:04.272 "dhchap_digests": [ 00:17:04.272 "sha256", 00:17:04.272 "sha384", 00:17:04.272 "sha512" 00:17:04.272 ], 00:17:04.272 "dhchap_dhgroups": [ 00:17:04.272 "null", 00:17:04.272 "ffdhe2048", 00:17:04.272 "ffdhe3072", 00:17:04.272 "ffdhe4096", 00:17:04.272 "ffdhe6144", 00:17:04.272 "ffdhe8192" 00:17:04.272 ] 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "bdev_nvme_set_hotplug", 00:17:04.272 "params": { 00:17:04.272 "period_us": 100000, 00:17:04.272 "enable": false 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "bdev_malloc_create", 00:17:04.272 "params": { 00:17:04.272 "name": "malloc0", 00:17:04.272 "num_blocks": 8192, 00:17:04.272 "block_size": 4096, 00:17:04.272 "physical_block_size": 4096, 00:17:04.272 "uuid": "ff6be0bc-01d1-4d06-9845-8ed57a0d63de", 00:17:04.272 "optimal_io_boundary": 0 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "bdev_wait_for_examine" 00:17:04.272 } 00:17:04.272 ] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "scsi", 00:17:04.272 "config": null 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "scheduler", 00:17:04.272 "config": [ 00:17:04.272 { 00:17:04.272 "method": "framework_set_scheduler", 00:17:04.272 "params": { 00:17:04.272 "name": "static" 00:17:04.272 } 00:17:04.272 } 00:17:04.272 ] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "vhost_scsi", 00:17:04.272 "config": [] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "vhost_blk", 00:17:04.272 "config": [] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "ublk", 00:17:04.272 "config": [ 00:17:04.272 { 00:17:04.272 "method": "ublk_create_target", 00:17:04.272 "params": { 00:17:04.272 "cpumask": "1" 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "ublk_start_disk", 00:17:04.272 "params": { 00:17:04.272 "bdev_name": "malloc0", 00:17:04.272 "ublk_id": 0, 00:17:04.272 "num_queues": 1, 00:17:04.272 "queue_depth": 128 00:17:04.272 } 00:17:04.272 } 00:17:04.272 ] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "nbd", 00:17:04.272 "config": [] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "nvmf", 00:17:04.272 "config": [ 00:17:04.272 { 00:17:04.272 "method": "nvmf_set_config", 00:17:04.272 "params": { 00:17:04.272 "discovery_filter": "match_any", 00:17:04.272 "admin_cmd_passthru": { 00:17:04.272 "identify_ctrlr": false 00:17:04.272 } 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "nvmf_set_max_subsystems", 00:17:04.272 "params": { 00:17:04.272 "max_subsystems": 1024 00:17:04.272 } 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "method": "nvmf_set_crdt", 00:17:04.272 "params": { 00:17:04.272 "crdt1": 0, 00:17:04.272 "crdt2": 0, 00:17:04.272 "crdt3": 0 00:17:04.272 } 00:17:04.272 } 00:17:04.272 ] 00:17:04.272 }, 00:17:04.272 { 00:17:04.272 "subsystem": "iscsi", 00:17:04.272 "config": [ 00:17:04.272 { 00:17:04.272 "method": "iscsi_set_options", 00:17:04.272 "params": { 00:17:04.272 "node_base": "iqn.2016-06.io.spdk", 00:17:04.272 "max_sessions": 128, 00:17:04.272 "max_connections_per_session": 
2, 00:17:04.272 "max_queue_depth": 64, 00:17:04.272 "default_time2wait": 2, 00:17:04.272 "default_time2retain": 20, 00:17:04.272 "first_burst_length": 8192, 00:17:04.272 "immediate_data": true, 00:17:04.272 "allow_duplicated_isid": false, 00:17:04.272 "error_recovery_level": 0, 00:17:04.272 "nop_timeout": 60, 00:17:04.272 "nop_in_interval": 30, 00:17:04.272 "disable_chap": false, 00:17:04.273 "require_chap": false, 00:17:04.273 "mutual_chap": false, 00:17:04.273 "chap_group": 0, 00:17:04.273 "max_large_datain_per_connection": 64, 00:17:04.273 "max_r2t_per_connection": 4, 00:17:04.273 "pdu_pool_size": 36864, 00:17:04.273 "immediate_data_pool_size": 16384, 00:17:04.273 "data_out_pool_size": 2048 00:17:04.273 } 00:17:04.273 } 00:17:04.273 ] 00:17:04.273 } 00:17:04.273 ] 00:17:04.273 }' 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 77596 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@949 -- # '[' -z 77596 ']' 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # kill -0 77596 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # uname 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 77596 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:04.273 killing process with pid 77596 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 77596' 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # kill 77596 00:17:04.273 10:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # wait 77596 00:17:05.648 [2024-06-10 10:05:54.949041] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:05.648 [2024-06-10 10:05:54.983737] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:05.648 [2024-06-10 10:05:54.983974] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:05.648 [2024-06-10 10:05:54.991702] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:05.648 [2024-06-10 10:05:54.991786] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:05.648 [2024-06-10 10:05:54.991807] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:05.648 [2024-06-10 10:05:54.991844] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:05.648 [2024-06-10 10:05:54.992058] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:07.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:07.025 10:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=77658 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 77658 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@830 -- # '[' -z 77658 ']' 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:07.025 10:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:07.025 "subsystems": [ 00:17:07.025 { 00:17:07.025 "subsystem": "keyring", 00:17:07.025 "config": [] 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "subsystem": "iobuf", 00:17:07.025 "config": [ 00:17:07.025 { 00:17:07.025 "method": "iobuf_set_options", 00:17:07.025 "params": { 00:17:07.025 "small_pool_count": 8192, 00:17:07.025 "large_pool_count": 1024, 00:17:07.025 "small_bufsize": 8192, 00:17:07.025 "large_bufsize": 135168 00:17:07.025 } 00:17:07.025 } 00:17:07.025 ] 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "subsystem": "sock", 00:17:07.025 "config": [ 00:17:07.025 { 00:17:07.025 "method": "sock_set_default_impl", 00:17:07.025 "params": { 00:17:07.025 "impl_name": "posix" 00:17:07.025 } 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "method": "sock_impl_set_options", 00:17:07.025 "params": { 00:17:07.025 "impl_name": "ssl", 00:17:07.025 "recv_buf_size": 4096, 00:17:07.025 "send_buf_size": 4096, 00:17:07.025 "enable_recv_pipe": true, 00:17:07.025 "enable_quickack": false, 00:17:07.025 "enable_placement_id": 0, 00:17:07.025 "enable_zerocopy_send_server": true, 00:17:07.025 "enable_zerocopy_send_client": false, 00:17:07.025 "zerocopy_threshold": 0, 00:17:07.025 "tls_version": 0, 00:17:07.025 "enable_ktls": false 00:17:07.025 } 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "method": "sock_impl_set_options", 00:17:07.025 "params": { 00:17:07.025 "impl_name": "posix", 00:17:07.025 "recv_buf_size": 2097152, 00:17:07.025 "send_buf_size": 2097152, 00:17:07.025 "enable_recv_pipe": true, 00:17:07.025 "enable_quickack": false, 00:17:07.025 "enable_placement_id": 0, 00:17:07.025 "enable_zerocopy_send_server": true, 00:17:07.025 "enable_zerocopy_send_client": false, 00:17:07.025 "zerocopy_threshold": 0, 00:17:07.025 "tls_version": 0, 00:17:07.025 "enable_ktls": false 00:17:07.025 } 00:17:07.025 } 00:17:07.025 ] 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "subsystem": "vmd", 00:17:07.025 "config": [] 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "subsystem": "accel", 00:17:07.025 "config": [ 00:17:07.025 { 00:17:07.025 "method": "accel_set_options", 00:17:07.025 "params": { 00:17:07.025 "small_cache_size": 128, 00:17:07.025 "large_cache_size": 16, 00:17:07.025 "task_count": 2048, 00:17:07.025 "sequence_count": 2048, 00:17:07.025 "buf_count": 2048 00:17:07.025 } 00:17:07.025 } 00:17:07.025 ] 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "subsystem": "bdev", 00:17:07.025 "config": [ 00:17:07.025 { 00:17:07.025 "method": "bdev_set_options", 00:17:07.025 "params": { 00:17:07.025 "bdev_io_pool_size": 
65535, 00:17:07.025 "bdev_io_cache_size": 256, 00:17:07.025 "bdev_auto_examine": true, 00:17:07.025 "iobuf_small_cache_size": 128, 00:17:07.025 "iobuf_large_cache_size": 16 00:17:07.025 } 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "method": "bdev_raid_set_options", 00:17:07.025 "params": { 00:17:07.025 "process_window_size_kb": 1024 00:17:07.025 } 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "method": "bdev_iscsi_set_options", 00:17:07.025 "params": { 00:17:07.025 "timeout_sec": 30 00:17:07.025 } 00:17:07.025 }, 00:17:07.025 { 00:17:07.025 "method": "bdev_nvme_set_options", 00:17:07.025 "params": { 00:17:07.025 "action_on_timeout": "none", 00:17:07.025 "timeout_us": 0, 00:17:07.025 "timeout_admin_us": 0, 00:17:07.025 "keep_alive_timeout_ms": 10000, 00:17:07.025 "arbitration_burst": 0, 00:17:07.025 "low_priority_weight": 0, 00:17:07.025 "medium_priority_weight": 0, 00:17:07.025 "high_priority_weight": 0, 00:17:07.025 "nvme_adminq_poll_period_us": 10000, 00:17:07.025 "nvme_ioq_poll_period_us": 0, 00:17:07.025 "io_queue_requests": 0, 00:17:07.025 "delay_cmd_submit": true, 00:17:07.025 "transport_retry_count": 4, 00:17:07.025 "bdev_retry_count": 3, 00:17:07.025 "transport_ack_timeout": 0, 00:17:07.025 "ctrlr_loss_timeout_sec": 0, 00:17:07.025 "reconnect_delay_sec": 0, 00:17:07.025 "fast_io_fail_timeout_sec": 0, 00:17:07.025 "disable_auto_failback": false, 00:17:07.025 "generate_uuids": false, 00:17:07.025 "transport_tos": 0, 00:17:07.025 "nvme_error_stat": false, 00:17:07.025 "rdma_srq_size": 0, 00:17:07.025 "io_path_stat": false, 00:17:07.025 "allow_accel_sequence": false, 00:17:07.025 "rdma_max_cq_size": 0, 00:17:07.025 "rdma_cm_event_timeout_ms": 0, 00:17:07.025 "dhchap_digests": [ 00:17:07.025 "sha256", 00:17:07.025 "sha384", 00:17:07.025 "sha512" 00:17:07.025 ], 00:17:07.025 "dhchap_dhgroups": [ 00:17:07.025 "null", 00:17:07.025 "ffdhe2048", 00:17:07.026 "ffdhe3072", 00:17:07.026 "ffdhe4096", 00:17:07.026 "ffdhe6144", 00:17:07.026 "ffdhe8192" 00:17:07.026 ] 00:17:07.026 } 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "method": "bdev_nvme_set_hotplug", 00:17:07.026 "params": { 00:17:07.026 "period_us": 100000, 00:17:07.026 "enable": false 00:17:07.026 } 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "method": "bdev_malloc_create", 00:17:07.026 "params": { 00:17:07.026 "name": "malloc0", 00:17:07.026 "num_blocks": 8192, 00:17:07.026 "block_size": 4096, 00:17:07.026 "physical_block_size": 4096, 00:17:07.026 "uuid": "ff6be0bc-01d1-4d06-9845-8ed57a0d63de", 00:17:07.026 "optimal_io_boundary": 0 00:17:07.026 } 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "method": "bdev_wait_for_examine" 00:17:07.026 } 00:17:07.026 ] 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "scsi", 00:17:07.026 "config": null 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "scheduler", 00:17:07.026 "config": [ 00:17:07.026 { 00:17:07.026 "method": "framework_set_scheduler", 00:17:07.026 "params": { 00:17:07.026 "name": "static" 00:17:07.026 } 00:17:07.026 } 00:17:07.026 ] 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "vhost_scsi", 00:17:07.026 "config": [] 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "vhost_blk", 00:17:07.026 "config": [] 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "ublk", 00:17:07.026 "config": [ 00:17:07.026 { 00:17:07.026 "method": "ublk_create_target", 00:17:07.026 "params": { 00:17:07.026 "cpumask": "1" 00:17:07.026 } 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "method": "ublk_start_disk", 00:17:07.026 "params": { 00:17:07.026 "bdev_name": "malloc0", 
00:17:07.026 "ublk_id": 0, 00:17:07.026 "num_queues": 1, 00:17:07.026 "queue_depth": 128 00:17:07.026 } 00:17:07.026 } 00:17:07.026 ] 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "nbd", 00:17:07.026 "config": [] 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "nvmf", 00:17:07.026 "config": [ 00:17:07.026 { 00:17:07.026 "method": "nvmf_set_config", 00:17:07.026 "params": { 00:17:07.026 "discovery_filter": "match_any", 00:17:07.026 "admin_cmd_passthru": { 00:17:07.026 "identify_ctrlr": false 00:17:07.026 } 00:17:07.026 } 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "method": "nvmf_set_max_subsystems", 00:17:07.026 "params": { 00:17:07.026 "max_subsystems": 1024 00:17:07.026 } 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "method": "nvmf_set_crdt", 00:17:07.026 "params": { 00:17:07.026 "crdt1": 0, 00:17:07.026 "crdt2": 0, 00:17:07.026 "crdt3": 0 00:17:07.026 } 00:17:07.026 } 00:17:07.026 ] 00:17:07.026 }, 00:17:07.026 { 00:17:07.026 "subsystem": "iscsi", 00:17:07.026 "config": [ 00:17:07.026 { 00:17:07.026 "method": "iscsi_set_options", 00:17:07.026 "params": { 00:17:07.026 "node_base": "iqn.2016-06.io.spdk", 00:17:07.026 "max_sessions": 128, 00:17:07.026 "max_connections_per_session": 2, 00:17:07.026 "max_queue_depth": 64, 00:17:07.026 "default_time2wait": 2, 00:17:07.026 "default_time2retain": 20, 00:17:07.026 "first_burst_length": 8192, 00:17:07.026 "immediate_data": true, 00:17:07.026 "allow_duplicated_isid": false, 00:17:07.026 "error_recovery_level": 0, 00:17:07.026 "nop_timeout": 60, 00:17:07.026 "nop_in_interval": 30, 00:17:07.026 "disable_chap": false, 00:17:07.026 "require_chap": false, 00:17:07.026 "mutual_chap": false, 00:17:07.026 "chap_group": 0, 00:17:07.026 "max_large_datain_per_connection": 64, 00:17:07.026 "max_r2t_per_connection": 4, 00:17:07.026 "pdu_pool_size": 36864, 00:17:07.026 "immediate_data_pool_size": 16384, 00:17:07.026 "data_out_pool_size": 2048 00:17:07.026 } 00:17:07.026 } 00:17:07.026 ] 00:17:07.026 } 00:17:07.026 ] 00:17:07.026 }' 00:17:07.026 10:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:07.026 [2024-06-10 10:05:56.402566] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
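For context on the two large JSON dumps: test_save_ublk_config first started a target, created the malloc0 bdev, exposed it as /dev/ublkb0, and captured the live state with the save_config RPC (the first dump); the second dump is that same JSON being echoed back into a fresh spdk_tgt through -c /dev/fd/63, which is what the startup lines here belong to. A condensed sketch of the round trip, with the rpc.py path and output file name assumed for illustration:

  # capture the running configuration (the same JSON shape as the dumps above)
  ./scripts/rpc.py save_config > ublk_config.json

  # a fresh target started from that file re-creates the ublk target and /dev/ublkb0
  # without any further RPC calls (the test then only checks that the block device exists)
  ./build/bin/spdk_tgt -L ublk -c ublk_config.json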
00:17:07.026 [2024-06-10 10:05:56.402763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77658 ] 00:17:07.283 [2024-06-10 10:05:56.573825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.541 [2024-06-10 10:05:56.843423] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.474 [2024-06-10 10:05:57.699684] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:08.474 [2024-06-10 10:05:57.700780] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:08.474 [2024-06-10 10:05:57.707860] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:08.474 [2024-06-10 10:05:57.708003] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:08.474 [2024-06-10 10:05:57.708024] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:08.474 [2024-06-10 10:05:57.708034] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:08.474 [2024-06-10 10:05:57.716744] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:08.474 [2024-06-10 10:05:57.716780] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:08.474 [2024-06-10 10:05:57.723704] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:08.474 [2024-06-10 10:05:57.723854] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:08.474 [2024-06-10 10:05:57.738718] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@863 -- # return 0 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 77658 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@949 -- # '[' -z 77658 ']' 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # kill -0 77658 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # uname 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 77658 00:17:08.474 killing process with pid 77658 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:08.474 10:05:57 
ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 77658' 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # kill 77658 00:17:08.474 10:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # wait 77658 00:17:10.373 [2024-06-10 10:05:59.461533] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:10.373 [2024-06-10 10:05:59.502703] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:10.373 [2024-06-10 10:05:59.502931] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:10.373 [2024-06-10 10:05:59.511709] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:10.373 [2024-06-10 10:05:59.511797] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:10.373 [2024-06-10 10:05:59.511812] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:10.373 [2024-06-10 10:05:59.511847] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:10.373 [2024-06-10 10:05:59.512044] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:11.308 10:06:00 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:11.308 00:17:11.308 real 0m9.004s 00:17:11.308 user 0m7.786s 00:17:11.308 sys 0m2.170s 00:17:11.308 10:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:11.308 ************************************ 00:17:11.308 END TEST test_save_ublk_config 00:17:11.308 ************************************ 00:17:11.308 10:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:11.566 10:06:00 ublk -- ublk/ublk.sh@139 -- # spdk_pid=77736 00:17:11.566 10:06:00 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:11.566 10:06:00 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.566 10:06:00 ublk -- ublk/ublk.sh@141 -- # waitforlisten 77736 00:17:11.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.566 10:06:00 ublk -- common/autotest_common.sh@830 -- # '[' -z 77736 ']' 00:17:11.566 10:06:00 ublk -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.566 10:06:00 ublk -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:11.566 10:06:00 ublk -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.566 10:06:00 ublk -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:11.566 10:06:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.566 [2024-06-10 10:06:00.924467] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
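The test_create_ublk case that follows walks a single device through its whole life cycle. A condensed sketch of the RPC sequence visible in the trace below, with the rpc.py path assumed and the argument values taken from the test (128 MiB malloc bdev, 4 KiB block size, 4 queues of depth 512):

  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create 128 4096               # Malloc0: 128 MiB, 4 KiB block size
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512     # exposes /dev/ublkb0
  ./scripts/rpc.py ublk_get_disks -n 0                       # sanity-check the listing
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 \
      --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
  ./scripts/rpc.py ublk_stop_disk 0          # a second stop of id 0 must fail ("no ublk dev with ublk_id=0")
  ./scripts/rpc.py ublk_destroy_target
  ./scripts/rpc.py bdev_malloc_delete Malloc0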
00:17:11.566 [2024-06-10 10:06:00.924833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77736 ] 00:17:11.824 [2024-06-10 10:06:01.089872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:11.824 [2024-06-10 10:06:01.280511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.824 [2024-06-10 10:06:01.280525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.758 10:06:01 ublk -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:12.758 10:06:01 ublk -- common/autotest_common.sh@863 -- # return 0 00:17:12.758 10:06:01 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:12.758 10:06:01 ublk -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:12.758 10:06:01 ublk -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:12.758 10:06:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.758 ************************************ 00:17:12.758 START TEST test_create_ublk 00:17:12.758 ************************************ 00:17:12.758 10:06:01 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # test_create_ublk 00:17:12.758 10:06:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:12.758 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.758 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.758 [2024-06-10 10:06:02.008683] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:12.758 [2024-06-10 10:06:02.011119] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:12.758 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.758 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:12.758 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:12.758 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.758 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.017 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.017 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:13.017 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:13.017 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.017 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.017 [2024-06-10 10:06:02.288872] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:13.018 [2024-06-10 10:06:02.289368] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:13.018 [2024-06-10 10:06:02.289392] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:13.018 [2024-06-10 10:06:02.289403] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:13.018 [2024-06-10 10:06:02.296974] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:13.018 [2024-06-10 10:06:02.297012] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:13.018 [2024-06-10 10:06:02.304698] 
ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:13.018 [2024-06-10 10:06:02.318975] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:13.018 [2024-06-10 10:06:02.343690] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:13.018 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:13.018 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.018 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.018 10:06:02 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:13.018 { 00:17:13.018 "ublk_device": "/dev/ublkb0", 00:17:13.018 "id": 0, 00:17:13.018 "queue_depth": 512, 00:17:13.018 "num_queues": 4, 00:17:13.018 "bdev_name": "Malloc0" 00:17:13.018 } 00:17:13.018 ]' 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:13.018 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:13.276 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:13.276 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:13.276 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:13.276 10:06:02 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:13.276 10:06:02 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:13.276 10:06:02 
ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:13.276 fio: verification read phase will never start because write phase uses all of runtime 00:17:13.276 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:13.276 fio-3.35 00:17:13.276 Starting 1 process 00:17:25.522 00:17:25.522 fio_test: (groupid=0, jobs=1): err= 0: pid=77786: Mon Jun 10 10:06:12 2024 00:17:25.522 write: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(415MiB/10001msec); 0 zone resets 00:17:25.522 clat (usec): min=60, max=4837, avg=92.41, stdev=139.21 00:17:25.522 lat (usec): min=60, max=4840, avg=93.30, stdev=139.23 00:17:25.522 clat percentiles (usec): 00:17:25.522 | 1.00th=[ 67], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 79], 00:17:25.522 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 84], 00:17:25.522 | 70.00th=[ 86], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 103], 00:17:25.522 | 99.00th=[ 125], 99.50th=[ 141], 99.90th=[ 2835], 99.95th=[ 3261], 00:17:25.522 | 99.99th=[ 3720] 00:17:25.522 bw ( KiB/s): min=41000, max=46504, per=100.00%, avg=42604.63, stdev=1201.65, samples=19 00:17:25.522 iops : min=10250, max=11626, avg=10651.16, stdev=300.41, samples=19 00:17:25.522 lat (usec) : 100=93.34%, 250=6.26%, 500=0.01%, 750=0.02%, 1000=0.03% 00:17:25.522 lat (msec) : 2=0.13%, 4=0.21%, 10=0.01% 00:17:25.522 cpu : usr=3.14%, sys=7.74%, ctx=106277, majf=0, minf=797 00:17:25.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.522 issued rwts: total=0,106277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.522 00:17:25.522 Run status group 0 (all jobs): 00:17:25.522 WRITE: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=415MiB (435MB), run=10001-10001msec 00:17:25.522 00:17:25.522 Disk stats (read/write): 00:17:25.522 ublkb0: ios=0/105311, merge=0/0, ticks=0/8873, in_queue=8874, util=99.03% 00:17:25.522 10:06:12 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 [2024-06-10 10:06:12.848339] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:25.522 [2024-06-10 10:06:12.879222] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:25.522 [2024-06-10 10:06:12.880734] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:25.522 [2024-06-10 10:06:12.886720] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:25.522 [2024-06-10 10:06:12.887089] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:25.522 [2024-06-10 10:06:12.887116] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.522 10:06:12 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:17:25.522 10:06:12 ublk.test_create_ublk -- 
common/autotest_common.sh@649 -- # local es=0 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # rpc_cmd ublk_stop_disk 0 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 [2024-06-10 10:06:12.902810] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:25.522 request: 00:17:25.522 { 00:17:25.522 "ublk_id": 0, 00:17:25.522 "method": "ublk_stop_disk", 00:17:25.522 "req_id": 1 00:17:25.522 } 00:17:25.522 Got JSON-RPC error response 00:17:25.522 response: 00:17:25.522 { 00:17:25.522 "code": -19, 00:17:25.522 "message": "No such device" 00:17:25.522 } 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # es=1 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:25.522 10:06:12 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 [2024-06-10 10:06:12.918804] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:25.522 [2024-06-10 10:06:12.926665] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:25.522 [2024-06-10 10:06:12.926736] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.522 10:06:12 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.522 10:06:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.522 10:06:13 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:25.522 10:06:13 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.522 10:06:13 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:25.522 10:06:13 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:25.522 10:06:13 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:25.522 10:06:13 
ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.522 10:06:13 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:25.522 10:06:13 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:25.522 ************************************ 00:17:25.522 END TEST test_create_ublk 00:17:25.522 ************************************ 00:17:25.522 10:06:13 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:25.522 00:17:25.522 real 0m11.357s 00:17:25.522 user 0m0.749s 00:17:25.522 sys 0m0.858s 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:25.522 10:06:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 10:06:13 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:25.522 10:06:13 ublk -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:25.522 10:06:13 ublk -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:25.522 10:06:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.522 ************************************ 00:17:25.522 START TEST test_create_multi_ublk 00:17:25.522 ************************************ 00:17:25.522 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # test_create_multi_ublk 00:17:25.522 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 [2024-06-10 10:06:13.410670] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:25.523 [2024-06-10 10:06:13.413338] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 [2024-06-10 10:06:13.677874] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:25.523 [2024-06-10 10:06:13.678404] 
ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:25.523 [2024-06-10 10:06:13.678430] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:25.523 [2024-06-10 10:06:13.678444] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.523 [2024-06-10 10:06:13.686868] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.523 [2024-06-10 10:06:13.686909] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.523 [2024-06-10 10:06:13.693685] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.523 [2024-06-10 10:06:13.694453] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:25.523 [2024-06-10 10:06:13.704765] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 [2024-06-10 10:06:13.968889] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:25.523 [2024-06-10 10:06:13.969412] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:25.523 [2024-06-10 10:06:13.969442] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:25.523 [2024-06-10 10:06:13.969455] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.523 [2024-06-10 10:06:13.977886] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.523 [2024-06-10 10:06:13.977927] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.523 [2024-06-10 10:06:13.984723] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.523 [2024-06-10 10:06:13.985516] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:25.523 [2024-06-10 10:06:13.990926] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd 
bdev_malloc_create -b Malloc2 128 4096 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 [2024-06-10 10:06:14.247898] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:25.523 [2024-06-10 10:06:14.248444] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:25.523 [2024-06-10 10:06:14.248482] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:25.523 [2024-06-10 10:06:14.248511] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.523 [2024-06-10 10:06:14.256140] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.523 [2024-06-10 10:06:14.256221] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.523 [2024-06-10 10:06:14.263722] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.523 [2024-06-10 10:06:14.264715] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:25.523 [2024-06-10 10:06:14.276677] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 [2024-06-10 10:06:14.534883] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:25.523 [2024-06-10 10:06:14.535400] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:25.523 [2024-06-10 10:06:14.535430] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:25.523 [2024-06-10 10:06:14.535442] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.523 [2024-06-10 10:06:14.542744] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: 
ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.523 [2024-06-10 10:06:14.542790] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.523 [2024-06-10 10:06:14.550734] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.523 [2024-06-10 10:06:14.551541] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:25.523 [2024-06-10 10:06:14.555018] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:25.523 { 00:17:25.523 "ublk_device": "/dev/ublkb0", 00:17:25.523 "id": 0, 00:17:25.523 "queue_depth": 512, 00:17:25.523 "num_queues": 4, 00:17:25.523 "bdev_name": "Malloc0" 00:17:25.523 }, 00:17:25.523 { 00:17:25.523 "ublk_device": "/dev/ublkb1", 00:17:25.523 "id": 1, 00:17:25.523 "queue_depth": 512, 00:17:25.523 "num_queues": 4, 00:17:25.523 "bdev_name": "Malloc1" 00:17:25.523 }, 00:17:25.523 { 00:17:25.523 "ublk_device": "/dev/ublkb2", 00:17:25.523 "id": 2, 00:17:25.523 "queue_depth": 512, 00:17:25.523 "num_queues": 4, 00:17:25.523 "bdev_name": "Malloc2" 00:17:25.523 }, 00:17:25.523 { 00:17:25.523 "ublk_device": "/dev/ublkb3", 00:17:25.523 "id": 3, 00:17:25.523 "queue_depth": 512, 00:17:25.523 "num_queues": 4, 00:17:25.523 "bdev_name": "Malloc3" 00:17:25.523 } 00:17:25.523 ]' 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:25.523 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:25.524 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 
-- # jq -r '.[1].id' 00:17:25.524 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:25.524 10:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:25.524 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:25.524 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:25.782 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:26.040 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:26.041 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:26.041 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.299 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.299 [2024-06-10 10:06:15.666911] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.299 [2024-06-10 
10:06:15.702808] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.299 [2024-06-10 10:06:15.704310] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.300 [2024-06-10 10:06:15.710715] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.300 [2024-06-10 10:06:15.711103] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:26.300 [2024-06-10 10:06:15.711130] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.300 [2024-06-10 10:06:15.726830] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.300 [2024-06-10 10:06:15.766719] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.300 [2024-06-10 10:06:15.768377] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.300 [2024-06-10 10:06:15.782769] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.300 [2024-06-10 10:06:15.783233] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:26.300 [2024-06-10 10:06:15.783275] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.300 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.300 [2024-06-10 10:06:15.798986] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.558 [2024-06-10 10:06:15.838762] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.558 [2024-06-10 10:06:15.840128] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.558 [2024-06-10 10:06:15.846714] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.558 [2024-06-10 10:06:15.847093] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:26.558 [2024-06-10 10:06:15.847125] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:26.558 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.558 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.558 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:26.558 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.558 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.558 [2024-06-10 10:06:15.857873] ublk.c: 
434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.558 [2024-06-10 10:06:15.894744] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.558 [2024-06-10 10:06:15.896704] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.558 [2024-06-10 10:06:15.908864] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.558 [2024-06-10 10:06:15.909417] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:26.558 [2024-06-10 10:06:15.909458] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:26.558 10:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.558 10:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:26.816 [2024-06-10 10:06:16.193838] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:26.816 [2024-06-10 10:06:16.204189] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:26.816 [2024-06-10 10:06:16.204281] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:26.816 10:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:26.816 10:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.816 10:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:26.816 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.816 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.074 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.074 10:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.074 10:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:27.074 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.074 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.333 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.333 10:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.333 10:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:27.333 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.333 10:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.899 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.899 10:06:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.899 10:06:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:27.899 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.899 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:28.157 10:06:17 
ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:28.157 ************************************ 00:17:28.157 END TEST test_create_multi_ublk 00:17:28.157 ************************************ 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:28.157 00:17:28.157 real 0m4.193s 00:17:28.157 user 0m1.402s 00:17:28.157 sys 0m0.152s 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:28.157 10:06:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.157 10:06:17 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:28.157 10:06:17 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:28.157 10:06:17 ublk -- ublk/ublk.sh@130 -- # killprocess 77736 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@949 -- # '[' -z 77736 ']' 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@953 -- # kill -0 77736 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@954 -- # uname 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 77736 00:17:28.157 killing process with pid 77736 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@967 -- # echo 'killing process with pid 77736' 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@968 -- # kill 77736 00:17:28.157 10:06:17 ublk -- common/autotest_common.sh@973 -- # wait 77736 00:17:29.534 [2024-06-10 10:06:18.676447] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:29.534 [2024-06-10 10:06:18.676519] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:30.468 ************************************ 00:17:30.468 END TEST ublk 00:17:30.468 ************************************ 00:17:30.468 00:17:30.468 real 0m28.138s 00:17:30.468 user 0m42.432s 00:17:30.468 sys 0m8.184s 00:17:30.468 10:06:19 ublk -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:30.468 10:06:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:30.468 10:06:19 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:30.468 10:06:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:30.468 10:06:19 -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:17:30.468 10:06:19 -- common/autotest_common.sh@10 -- # set +x 00:17:30.468 ************************************ 00:17:30.468 START TEST ublk_recovery 00:17:30.468 ************************************ 00:17:30.468 10:06:19 ublk_recovery -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:30.468 * Looking for test storage... 00:17:30.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:30.469 10:06:19 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:30.469 10:06:19 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:30.469 10:06:19 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:30.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.469 10:06:19 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78120 00:17:30.469 10:06:19 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.469 10:06:19 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:30.469 10:06:19 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78120 00:17:30.469 10:06:19 ublk_recovery -- common/autotest_common.sh@830 -- # '[' -z 78120 ']' 00:17:30.469 10:06:19 ublk_recovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.469 10:06:19 ublk_recovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:30.469 10:06:19 ublk_recovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.469 10:06:19 ublk_recovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:30.469 10:06:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.727 [2024-06-10 10:06:20.041001] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
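The two ublk unit tests above reduce to a short RPC lifecycle. As a minimal sketch, with the commands and sizes copied from the test_create_ublk trace (it assumes spdk_tgt is already running with -L ublk and that rpc.py resolves to scripts/rpc.py):

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096        # 128 MiB backing bdev, 4 KiB blocks
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512         # exposes /dev/ublkb0
  scripts/rpc.py ublk_get_disks -n 0                           # check ublk_device, id, queue_depth, num_queues, bdev_name
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 \
      --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
  scripts/rpc.py ublk_stop_disk 0
  scripts/rpc.py ublk_destroy_target
  scripts/rpc.py bdev_malloc_delete Malloc0

test_create_multi_ublk runs the same start/check/stop sequence for Malloc0 through Malloc3 on ublk ids 0 through 3 before destroying the target; ublk_recovery, which starts here, reuses the same building blocks.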
00:17:30.727 [2024-06-10 10:06:20.041151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78120 ] 00:17:30.727 [2024-06-10 10:06:20.205033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:30.985 [2024-06-10 10:06:20.432065] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.985 [2024-06-10 10:06:20.432072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@863 -- # return 0 00:17:31.919 10:06:21 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.919 [2024-06-10 10:06:21.156674] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:31.919 [2024-06-10 10:06:21.159078] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.919 10:06:21 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.919 malloc0 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.919 10:06:21 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.919 [2024-06-10 10:06:21.292851] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:17:31.919 [2024-06-10 10:06:21.292991] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:31.919 [2024-06-10 10:06:21.293015] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:31.919 [2024-06-10 10:06:21.293025] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:31.919 [2024-06-10 10:06:21.301770] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:31.919 [2024-06-10 10:06:21.301806] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:31.919 [2024-06-10 10:06:21.308689] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:31.919 [2024-06-10 10:06:21.308887] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:31.919 [2024-06-10 10:06:21.331723] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:31.919 1 00:17:31.919 10:06:21 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.919 10:06:21 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:32.855 10:06:22 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78155 00:17:32.855 10:06:22 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 
--iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:32.855 10:06:22 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:33.116 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.116 fio-3.35 00:17:33.116 Starting 1 process 00:17:38.383 10:06:27 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78120 00:17:38.383 10:06:27 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:43.645 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78120 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:43.645 10:06:32 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=78265 00:17:43.645 10:06:32 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:43.645 10:06:32 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.645 10:06:32 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 78265 00:17:43.645 10:06:32 ublk_recovery -- common/autotest_common.sh@830 -- # '[' -z 78265 ']' 00:17:43.645 10:06:32 ublk_recovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.645 10:06:32 ublk_recovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:43.645 10:06:32 ublk_recovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.645 10:06:32 ublk_recovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:43.645 10:06:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.645 [2024-06-10 10:06:32.450678] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
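At this point the recovery scenario is in flight: a ublk disk was created, fio was started against it, and the first SPDK target (pid 78120) was killed with SIGKILL while I/O was still running; a replacement target (pid 78265) is now coming up. A minimal sketch of the whole sequence, with the commands taken from the ublk_recovery.sh trace (the shell variables are stand-ins for the concrete pids and paths in the log):

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128                 # exposes /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_pid"                                                  # crash the target mid-I/O
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &                            # start a fresh target
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1                           # re-attach the surviving /dev/ublkb1, as the trace below shows

If recovery succeeds, the fio job started before the kill runs its full 60 seconds with err=0, which is what the summary further down reports.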
00:17:43.645 [2024-06-10 10:06:32.451078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78265 ] 00:17:43.645 [2024-06-10 10:06:32.609426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:43.645 [2024-06-10 10:06:32.798381] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.645 [2024-06-10 10:06:32.798388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@863 -- # return 0 00:17:44.209 10:06:33 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.209 [2024-06-10 10:06:33.533678] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:44.209 [2024-06-10 10:06:33.536068] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.209 10:06:33 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.209 malloc0 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.209 10:06:33 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.209 10:06:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.209 [2024-06-10 10:06:33.668879] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:44.209 [2024-06-10 10:06:33.668958] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:44.209 [2024-06-10 10:06:33.668983] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:44.210 [2024-06-10 10:06:33.678752] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:44.210 [2024-06-10 10:06:33.678812] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:44.210 1 00:17:44.210 [2024-06-10 10:06:33.678938] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:44.210 10:06:33 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.210 10:06:33 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78155 00:18:10.735 [2024-06-10 10:06:57.845718] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:10.735 [2024-06-10 10:06:57.852702] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:10.735 [2024-06-10 10:06:57.861152] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:10.735 [2024-06-10 10:06:57.861213] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:37.263 00:18:37.263 fio_test: (groupid=0, 
jobs=1): err= 0: pid=78158: Mon Jun 10 10:07:22 2024 00:18:37.263 read: IOPS=9473, BW=37.0MiB/s (38.8MB/s)(2220MiB/60002msec) 00:18:37.263 slat (nsec): min=1971, max=965320, avg=6671.41, stdev=3009.93 00:18:37.263 clat (usec): min=1015, max=30524k, avg=6159.32, stdev=297452.69 00:18:37.263 lat (usec): min=1022, max=30524k, avg=6165.99, stdev=297452.69 00:18:37.263 clat percentiles (usec): 00:18:37.263 | 1.00th=[ 2606], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:18:37.263 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:18:37.263 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3884], 95.00th=[ 4621], 00:18:37.263 | 99.00th=[ 7177], 99.50th=[ 7832], 99.90th=[13566], 99.95th=[13960], 00:18:37.263 | 99.99th=[14484] 00:18:37.263 bw ( KiB/s): min=15400, max=82984, per=100.00%, avg=75858.85, stdev=12625.01, samples=59 00:18:37.263 iops : min= 3850, max=20746, avg=18964.71, stdev=3156.25, samples=59 00:18:37.263 write: IOPS=9459, BW=37.0MiB/s (38.7MB/s)(2217MiB/60002msec); 0 zone resets 00:18:37.263 slat (usec): min=2, max=210, avg= 6.68, stdev= 2.76 00:18:37.263 clat (usec): min=1039, max=30524k, avg=7347.55, stdev=348453.10 00:18:37.263 lat (usec): min=1056, max=30524k, avg=7354.23, stdev=348453.10 00:18:37.263 clat percentiles (msec): 00:18:37.264 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:18:37.264 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:18:37.264 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:18:37.264 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 14], 99.95th=[ 15], 00:18:37.264 | 99.99th=[17113] 00:18:37.264 bw ( KiB/s): min=15368, max=82712, per=100.00%, avg=75743.73, stdev=12479.74, samples=59 00:18:37.264 iops : min= 3842, max=20678, avg=18935.93, stdev=3119.94, samples=59 00:18:37.264 lat (msec) : 2=0.06%, 4=91.02%, 10=8.71%, 20=0.21%, >=2000=0.01% 00:18:37.264 cpu : usr=5.45%, sys=11.80%, ctx=37771, majf=0, minf=13 00:18:37.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:37.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.264 issued rwts: total=568417,567586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.264 00:18:37.264 Run status group 0 (all jobs): 00:18:37.264 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=2220MiB (2328MB), run=60002-60002msec 00:18:37.264 WRITE: bw=37.0MiB/s (38.7MB/s), 37.0MiB/s-37.0MiB/s (38.7MB/s-38.7MB/s), io=2217MiB (2325MB), run=60002-60002msec 00:18:37.264 00:18:37.264 Disk stats (read/write): 00:18:37.264 ublkb1: ios=566233/565333, merge=0/0, ticks=3444070/4047186, in_queue=7491256, util=99.93% 00:18:37.264 10:07:22 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.264 [2024-06-10 10:07:22.604708] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:37.264 [2024-06-10 10:07:22.653801] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:37.264 [2024-06-10 10:07:22.654376] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:37.264 [2024-06-10 10:07:22.660711] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 
completed 00:18:37.264 [2024-06-10 10:07:22.660867] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:37.264 [2024-06-10 10:07:22.660884] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.264 10:07:22 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.264 [2024-06-10 10:07:22.675863] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:37.264 [2024-06-10 10:07:22.683695] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:37.264 [2024-06-10 10:07:22.683783] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.264 10:07:22 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:37.264 10:07:22 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:37.264 10:07:22 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 78265 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@949 -- # '[' -z 78265 ']' 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@953 -- # kill -0 78265 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@954 -- # uname 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 78265 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:37.264 killing process with pid 78265 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 78265' 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@968 -- # kill 78265 00:18:37.264 10:07:22 ublk_recovery -- common/autotest_common.sh@973 -- # wait 78265 00:18:37.264 [2024-06-10 10:07:23.687887] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:37.264 [2024-06-10 10:07:23.687966] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:37.264 ************************************ 00:18:37.264 END TEST ublk_recovery 00:18:37.264 ************************************ 00:18:37.264 00:18:37.264 real 1m5.115s 00:18:37.264 user 1m51.334s 00:18:37.264 sys 0m18.448s 00:18:37.264 10:07:24 ublk_recovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:37.264 10:07:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.264 10:07:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:37.264 10:07:25 -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:37.264 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:18:37.264 10:07:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- 
spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:18:37.264 10:07:25 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:37.264 10:07:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:37.264 10:07:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:37.264 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:18:37.264 ************************************ 00:18:37.264 START TEST ftl 00:18:37.264 ************************************ 00:18:37.264 10:07:25 ftl -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:37.264 * Looking for test storage... 00:18:37.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:37.264 10:07:25 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:37.264 10:07:25 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:37.264 10:07:25 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:37.264 10:07:25 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:37.264 10:07:25 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:37.264 10:07:25 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.264 10:07:25 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:37.264 10:07:25 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:37.264 10:07:25 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.264 10:07:25 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.264 10:07:25 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:37.264 10:07:25 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:37.264 10:07:25 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:37.264 10:07:25 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:37.264 10:07:25 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:37.264 10:07:25 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:37.264 10:07:25 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.264 10:07:25 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:37.264 10:07:25 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:37.264 10:07:25 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:37.264 10:07:25 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:37.264 10:07:25 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:37.264 10:07:25 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:37.264 10:07:25 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:37.264 10:07:25 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:37.264 10:07:25 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:37.264 10:07:25 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:18:37.264 10:07:25 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:37.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:37.264 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:37.264 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:37.264 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:37.264 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79036 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:37.264 10:07:25 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79036 00:18:37.264 10:07:25 ftl -- common/autotest_common.sh@830 -- # '[' -z 79036 ']' 00:18:37.264 10:07:25 ftl -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.264 10:07:25 ftl -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:37.264 10:07:25 ftl -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.264 10:07:25 ftl -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:37.264 10:07:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:37.264 [2024-06-10 10:07:25.768110] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
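ftl.sh has just started spdk_tgt with --wait-for-rpc, so the framework stays idle until it is configured over RPC. The bring-up performed by the trace that follows is, in sketch form (calls copied from the ftl.sh@40-@43 lines as logged; writing the config via process substitution is an assumption about how /dev/fd/62 reaches load_subsystem_config):

  "$SPDK_BIN_DIR/spdk_tgt" --wait-for-rpc &
  scripts/rpc.py bdev_set_options -d                                   # -d disables bdev auto-examine before init
  scripts/rpc.py framework_start_init
  scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)       # attach the NVMe controllers found by setup.sh

With the NVMe bdevs attached, the script then inspects them to choose a cache device and a base device for the FTL tests.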
00:18:37.264 [2024-06-10 10:07:25.768779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79036 ] 00:18:37.264 [2024-06-10 10:07:25.955551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.264 [2024-06-10 10:07:26.193174] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.264 10:07:26 ftl -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:37.265 10:07:26 ftl -- common/autotest_common.sh@863 -- # return 0 00:18:37.265 10:07:26 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:37.523 10:07:26 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:38.457 10:07:27 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:38.457 10:07:27 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:39.025 10:07:28 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:39.025 10:07:28 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:39.025 10:07:28 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@50 -- # break 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:39.307 10:07:28 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:39.565 10:07:29 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:39.565 10:07:29 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:39.565 10:07:29 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:39.565 10:07:29 ftl -- ftl/ftl.sh@63 -- # break 00:18:39.565 10:07:29 ftl -- ftl/ftl.sh@66 -- # killprocess 79036 00:18:39.565 10:07:29 ftl -- common/autotest_common.sh@949 -- # '[' -z 79036 ']' 00:18:39.565 10:07:29 ftl -- common/autotest_common.sh@953 -- # kill -0 79036 00:18:39.565 10:07:29 ftl -- common/autotest_common.sh@954 -- # uname 00:18:39.565 10:07:29 ftl -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:39.565 10:07:29 ftl -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 79036 00:18:39.823 killing process with pid 79036 00:18:39.823 10:07:29 ftl -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:39.823 10:07:29 ftl -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:39.823 10:07:29 ftl -- common/autotest_common.sh@967 -- # echo 'killing process with pid 79036' 00:18:39.823 10:07:29 ftl -- common/autotest_common.sh@968 -- # kill 79036 00:18:39.823 10:07:29 ftl -- common/autotest_common.sh@973 -- # wait 79036 00:18:41.722 10:07:31 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:41.722 10:07:31 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:41.722 10:07:31 ftl -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:41.722 10:07:31 ftl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:41.722 10:07:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:41.722 ************************************ 00:18:41.722 START TEST ftl_fio_basic 00:18:41.722 ************************************ 00:18:41.722 10:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:41.981 * Looking for test storage... 00:18:41.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79174 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79174 00:18:41.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@830 -- # '[' -z 79174 ']' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:41.981 10:07:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:41.981 [2024-06-10 10:07:31.442785] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
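Two steps traced above are worth isolating. First, ftl.sh@46-63 picked the PCI devices by filtering bdev_get_bdevs output through jq: the NV-cache disk must expose 64-byte metadata, be non-zoned and hold at least 1310720 blocks, and the base disk is any other qualifying non-zoned controller. A condensed sketch of that selection, with head -n1 as an assumed stand-in for the script's for/break loops and $rpc as shorthand for the rpc.py path used throughout this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    min_blocks=1310720   # same size floor the script applies to both roles
    # NV-cache candidate: 64-byte metadata, non-zoned, large enough.
    nv_cache=$($rpc bdev_get_bdevs | jq -r "
        .[] | select(.md_size==64 and .zoned == false
                     and .num_blocks >= $min_blocks)
            | .driver_specific.nvme[].pci_address" | head -n1)
    # Base candidate: any other non-zoned controller of sufficient size.
    device=$($rpc bdev_get_bdevs | jq -r "
        .[] | select(.driver_specific.nvme[0].pci_address != \"$nv_cache\"
                     and .zoned == false and .num_blocks >= $min_blocks)
            | .driver_specific.nvme[].pci_address" | head -n1)
    echo "nv_cache=$nv_cache device=$device"   # 0000:00:10.0 / 0000:00:11.0 here

Second, fio.sh resolves its "basic" argument through the suite associative array declared at fio.sh@12-14, which is how tests becomes "randw-verify randw-verify-j2 randw-verify-depth128" for this run before the dedicated spdk_tgt -m 7 instance is brought up.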
00:18:41.981 [2024-06-10 10:07:31.442953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79174 ] 00:18:42.239 [2024-06-10 10:07:31.617247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.497 [2024-06-10 10:07:31.915452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.497 [2024-06-10 10:07:31.915589] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.497 [2024-06-10 10:07:31.915590] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.438 10:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:43.439 10:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@863 -- # return 0 00:18:43.439 10:07:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:43.439 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:43.439 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:43.439 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:43.439 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:43.439 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:43.696 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:43.696 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:43.696 10:07:32 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:43.696 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local bdev_name=nvme0n1 00:18:43.696 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_info 00:18:43.696 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bs 00:18:43.696 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local nb 00:18:43.696 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:18:43.955 { 00:18:43.955 "name": "nvme0n1", 00:18:43.955 "aliases": [ 00:18:43.955 "65c1b1f0-ec9a-4c2d-8886-5cada03d9dde" 00:18:43.955 ], 00:18:43.955 "product_name": "NVMe disk", 00:18:43.955 "block_size": 4096, 00:18:43.955 "num_blocks": 1310720, 00:18:43.955 "uuid": "65c1b1f0-ec9a-4c2d-8886-5cada03d9dde", 00:18:43.955 "assigned_rate_limits": { 00:18:43.955 "rw_ios_per_sec": 0, 00:18:43.955 "rw_mbytes_per_sec": 0, 00:18:43.955 "r_mbytes_per_sec": 0, 00:18:43.955 "w_mbytes_per_sec": 0 00:18:43.955 }, 00:18:43.955 "claimed": false, 00:18:43.955 "zoned": false, 00:18:43.955 "supported_io_types": { 00:18:43.955 "read": true, 00:18:43.955 "write": true, 00:18:43.955 "unmap": true, 00:18:43.955 "write_zeroes": true, 00:18:43.955 "flush": true, 00:18:43.955 "reset": true, 00:18:43.955 "compare": true, 00:18:43.955 "compare_and_write": false, 00:18:43.955 "abort": true, 00:18:43.955 "nvme_admin": true, 00:18:43.955 "nvme_io": true 00:18:43.955 }, 00:18:43.955 "driver_specific": { 00:18:43.955 "nvme": [ 00:18:43.955 { 00:18:43.955 "pci_address": "0000:00:11.0", 00:18:43.955 "trid": { 00:18:43.955 "trtype": "PCIe", 00:18:43.955 "traddr": "0000:00:11.0" 00:18:43.955 }, 
00:18:43.955 "ctrlr_data": { 00:18:43.955 "cntlid": 0, 00:18:43.955 "vendor_id": "0x1b36", 00:18:43.955 "model_number": "QEMU NVMe Ctrl", 00:18:43.955 "serial_number": "12341", 00:18:43.955 "firmware_revision": "8.0.0", 00:18:43.955 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:43.955 "oacs": { 00:18:43.955 "security": 0, 00:18:43.955 "format": 1, 00:18:43.955 "firmware": 0, 00:18:43.955 "ns_manage": 1 00:18:43.955 }, 00:18:43.955 "multi_ctrlr": false, 00:18:43.955 "ana_reporting": false 00:18:43.955 }, 00:18:43.955 "vs": { 00:18:43.955 "nvme_version": "1.4" 00:18:43.955 }, 00:18:43.955 "ns_data": { 00:18:43.955 "id": 1, 00:18:43.955 "can_share": false 00:18:43.955 } 00:18:43.955 } 00:18:43.955 ], 00:18:43.955 "mp_policy": "active_passive" 00:18:43.955 } 00:18:43.955 } 00:18:43.955 ]' 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bs=4096 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # nb=1310720 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_size=5120 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # echo 5120 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:43.955 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:44.214 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:44.214 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:44.473 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=4a6b207b-eb25-4290-bdb7-dba81b5a0aa0 00:18:44.473 10:07:33 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4a6b207b-eb25-4290-bdb7-dba81b5a0aa0 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local bdev_name=4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_info 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bs 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local nb 00:18:44.731 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:44.989 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:18:44.989 { 00:18:44.989 "name": "4a088d7a-7ed2-43ec-ac33-875d339df0be", 00:18:44.989 "aliases": [ 00:18:44.989 "lvs/nvme0n1p0" 00:18:44.989 ], 00:18:44.989 "product_name": "Logical Volume", 00:18:44.989 "block_size": 4096, 00:18:44.989 "num_blocks": 26476544, 00:18:44.989 "uuid": "4a088d7a-7ed2-43ec-ac33-875d339df0be", 00:18:44.989 "assigned_rate_limits": { 00:18:44.989 "rw_ios_per_sec": 0, 00:18:44.989 "rw_mbytes_per_sec": 0, 00:18:44.989 "r_mbytes_per_sec": 0, 00:18:44.989 "w_mbytes_per_sec": 0 00:18:44.989 }, 00:18:44.989 "claimed": false, 00:18:44.989 "zoned": false, 00:18:44.989 "supported_io_types": { 00:18:44.989 "read": true, 00:18:44.989 "write": true, 00:18:44.989 "unmap": true, 00:18:44.989 "write_zeroes": true, 00:18:44.989 "flush": false, 00:18:44.990 "reset": true, 00:18:44.990 "compare": false, 00:18:44.990 "compare_and_write": false, 00:18:44.990 "abort": false, 00:18:44.990 "nvme_admin": false, 00:18:44.990 "nvme_io": false 00:18:44.990 }, 00:18:44.990 "driver_specific": { 00:18:44.990 "lvol": { 00:18:44.990 "lvol_store_uuid": "4a6b207b-eb25-4290-bdb7-dba81b5a0aa0", 00:18:44.990 "base_bdev": "nvme0n1", 00:18:44.990 "thin_provision": true, 00:18:44.990 "num_allocated_clusters": 0, 00:18:44.990 "snapshot": false, 00:18:44.990 "clone": false, 00:18:44.990 "esnap_clone": false 00:18:44.990 } 00:18:44.990 } 00:18:44.990 } 00:18:44.990 ]' 00:18:44.990 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bs=4096 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # nb=26476544 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # echo 103424 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:45.247 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local bdev_name=4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_info 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bs 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local nb 00:18:45.505 10:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:45.763 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:18:45.763 { 00:18:45.763 "name": "4a088d7a-7ed2-43ec-ac33-875d339df0be", 00:18:45.763 "aliases": [ 00:18:45.763 "lvs/nvme0n1p0" 
00:18:45.763 ], 00:18:45.763 "product_name": "Logical Volume", 00:18:45.763 "block_size": 4096, 00:18:45.763 "num_blocks": 26476544, 00:18:45.763 "uuid": "4a088d7a-7ed2-43ec-ac33-875d339df0be", 00:18:45.763 "assigned_rate_limits": { 00:18:45.763 "rw_ios_per_sec": 0, 00:18:45.763 "rw_mbytes_per_sec": 0, 00:18:45.763 "r_mbytes_per_sec": 0, 00:18:45.763 "w_mbytes_per_sec": 0 00:18:45.763 }, 00:18:45.763 "claimed": false, 00:18:45.763 "zoned": false, 00:18:45.763 "supported_io_types": { 00:18:45.763 "read": true, 00:18:45.763 "write": true, 00:18:45.763 "unmap": true, 00:18:45.763 "write_zeroes": true, 00:18:45.763 "flush": false, 00:18:45.763 "reset": true, 00:18:45.763 "compare": false, 00:18:45.763 "compare_and_write": false, 00:18:45.763 "abort": false, 00:18:45.763 "nvme_admin": false, 00:18:45.763 "nvme_io": false 00:18:45.763 }, 00:18:45.763 "driver_specific": { 00:18:45.763 "lvol": { 00:18:45.763 "lvol_store_uuid": "4a6b207b-eb25-4290-bdb7-dba81b5a0aa0", 00:18:45.763 "base_bdev": "nvme0n1", 00:18:45.763 "thin_provision": true, 00:18:45.763 "num_allocated_clusters": 0, 00:18:45.763 "snapshot": false, 00:18:45.763 "clone": false, 00:18:45.763 "esnap_clone": false 00:18:45.763 } 00:18:45.763 } 00:18:45.763 } 00:18:45.763 ]' 00:18:45.763 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:18:45.763 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bs=4096 00:18:45.763 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:18:45.763 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # nb=26476544 00:18:45.763 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:18:45.764 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # echo 103424 00:18:45.764 10:07:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:45.764 10:07:35 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:46.021 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local bdev_name=4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_info 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bs 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local nb 00:18:46.021 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a088d7a-7ed2-43ec-ac33-875d339df0be 00:18:46.280 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:18:46.280 { 00:18:46.280 "name": "4a088d7a-7ed2-43ec-ac33-875d339df0be", 00:18:46.280 "aliases": [ 00:18:46.280 "lvs/nvme0n1p0" 00:18:46.280 ], 00:18:46.280 "product_name": "Logical Volume", 00:18:46.280 "block_size": 4096, 00:18:46.280 "num_blocks": 26476544, 00:18:46.280 "uuid": "4a088d7a-7ed2-43ec-ac33-875d339df0be", 00:18:46.280 "assigned_rate_limits": { 00:18:46.280 "rw_ios_per_sec": 0, 
00:18:46.280 "rw_mbytes_per_sec": 0, 00:18:46.280 "r_mbytes_per_sec": 0, 00:18:46.280 "w_mbytes_per_sec": 0 00:18:46.280 }, 00:18:46.280 "claimed": false, 00:18:46.280 "zoned": false, 00:18:46.280 "supported_io_types": { 00:18:46.280 "read": true, 00:18:46.280 "write": true, 00:18:46.280 "unmap": true, 00:18:46.280 "write_zeroes": true, 00:18:46.280 "flush": false, 00:18:46.280 "reset": true, 00:18:46.280 "compare": false, 00:18:46.280 "compare_and_write": false, 00:18:46.280 "abort": false, 00:18:46.280 "nvme_admin": false, 00:18:46.280 "nvme_io": false 00:18:46.280 }, 00:18:46.280 "driver_specific": { 00:18:46.280 "lvol": { 00:18:46.280 "lvol_store_uuid": "4a6b207b-eb25-4290-bdb7-dba81b5a0aa0", 00:18:46.280 "base_bdev": "nvme0n1", 00:18:46.280 "thin_provision": true, 00:18:46.280 "num_allocated_clusters": 0, 00:18:46.280 "snapshot": false, 00:18:46.280 "clone": false, 00:18:46.280 "esnap_clone": false 00:18:46.280 } 00:18:46.280 } 00:18:46.280 } 00:18:46.280 ]' 00:18:46.280 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:18:46.280 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bs=4096 00:18:46.280 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:18:46.538 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # nb=26476544 00:18:46.538 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:18:46.538 10:07:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # echo 103424 00:18:46.538 10:07:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:46.538 10:07:35 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:46.538 10:07:35 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4a088d7a-7ed2-43ec-ac33-875d339df0be -c nvc0n1p0 --l2p_dram_limit 60 00:18:46.798 [2024-06-10 10:07:36.096819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.096883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:46.798 [2024-06-10 10:07:36.096910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:46.798 [2024-06-10 10:07:36.096925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.097018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.097040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.798 [2024-06-10 10:07:36.097056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:46.798 [2024-06-10 10:07:36.097069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.097111] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:46.798 [2024-06-10 10:07:36.098182] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:46.798 [2024-06-10 10:07:36.098227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.098243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.798 [2024-06-10 10:07:36.098263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:18:46.798 [2024-06-10 10:07:36.098276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
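The create traced at fio.sh@60 above is the heart of the setup: the thin-provisioned lvol becomes the FTL base device and the nvc0n1p0 split its write-buffer cache, with the resident L2P capped at 60 MiB. The same call isolated, with $rpc again standing in for the rpc.py path from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -t 240 widens the client-side RPC timeout: first-time startup scrubs the
    # NV cache data region (the 2857.645 ms "Scrub NV cache" step further on).
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d 4a088d7a-7ed2-43ec-ac33-875d339df0be \
        -c nvc0n1p0 \
        --l2p_dram_limit 60

The layout dump below makes the cap concrete: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region reported on the NV cache, so under the 60 MiB limit the driver later logs "l2p maximum resident size is: 59 (of 60) MiB".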
00:18:46.798 [2024-06-10 10:07:36.098523] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 77494017-62af-418b-b177-661f4a90c035 00:18:46.798 [2024-06-10 10:07:36.100442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.100533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:46.798 [2024-06-10 10:07:36.100573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:46.798 [2024-06-10 10:07:36.100612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.107895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.108012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.798 [2024-06-10 10:07:36.108052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:18:46.798 [2024-06-10 10:07:36.108091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.108373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.108429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.798 [2024-06-10 10:07:36.108461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:18:46.798 [2024-06-10 10:07:36.108492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.108748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.109012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:46.798 [2024-06-10 10:07:36.109200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:46.798 [2024-06-10 10:07:36.109664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.109956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.798 [2024-06-10 10:07:36.116090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.116408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.798 [2024-06-10 10:07:36.116590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:18:46.798 [2024-06-10 10:07:36.116726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.116895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.117170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:46.798 [2024-06-10 10:07:36.117358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:46.798 [2024-06-10 10:07:36.117625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.117890] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:46.798 [2024-06-10 10:07:36.118127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:46.798 [2024-06-10 10:07:36.118161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:46.798 [2024-06-10 10:07:36.118183] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:46.798 [2024-06-10 10:07:36.118209] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:46.798 [2024-06-10 10:07:36.118228] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:46.798 [2024-06-10 10:07:36.118250] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:46.798 [2024-06-10 10:07:36.118266] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:46.798 [2024-06-10 10:07:36.118283] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:46.798 [2024-06-10 10:07:36.118298] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:46.798 [2024-06-10 10:07:36.118321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.118336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:46.798 [2024-06-10 10:07:36.118355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:18:46.798 [2024-06-10 10:07:36.118370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.118506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.798 [2024-06-10 10:07:36.118525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:46.798 [2024-06-10 10:07:36.118544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:46.798 [2024-06-10 10:07:36.118559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.798 [2024-06-10 10:07:36.119981] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:46.798 [2024-06-10 10:07:36.120271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:46.798 [2024-06-10 10:07:36.120415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.798 [2024-06-10 10:07:36.120516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.798 [2024-06-10 10:07:36.120662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:46.798 [2024-06-10 10:07:36.120915] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:46.798 [2024-06-10 10:07:36.121052] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:46.798 [2024-06-10 10:07:36.121165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:46.798 [2024-06-10 10:07:36.121262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:46.798 [2024-06-10 10:07:36.121542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.798 [2024-06-10 10:07:36.121755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:46.798 [2024-06-10 10:07:36.121869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:46.798 [2024-06-10 10:07:36.121970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.798 [2024-06-10 10:07:36.122199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:46.798 [2024-06-10 10:07:36.122331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:46.798 [2024-06-10 10:07:36.122359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.798 
[2024-06-10 10:07:36.122379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:46.798 [2024-06-10 10:07:36.122395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:46.798 [2024-06-10 10:07:36.122415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.798 [2024-06-10 10:07:36.122431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:46.798 [2024-06-10 10:07:36.122448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:46.798 [2024-06-10 10:07:36.122463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.798 [2024-06-10 10:07:36.122480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:46.798 [2024-06-10 10:07:36.122495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:46.798 [2024-06-10 10:07:36.122511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.798 [2024-06-10 10:07:36.122526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:46.798 [2024-06-10 10:07:36.122543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:46.798 [2024-06-10 10:07:36.122558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.798 [2024-06-10 10:07:36.122575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:46.798 [2024-06-10 10:07:36.122589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:46.798 [2024-06-10 10:07:36.122606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.798 [2024-06-10 10:07:36.122622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:46.798 [2024-06-10 10:07:36.122668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:46.798 [2024-06-10 10:07:36.122692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.798 [2024-06-10 10:07:36.122725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:46.798 [2024-06-10 10:07:36.122750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:46.798 [2024-06-10 10:07:36.122771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.798 [2024-06-10 10:07:36.122787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:46.798 [2024-06-10 10:07:36.122804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:46.798 [2024-06-10 10:07:36.122819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.799 [2024-06-10 10:07:36.122836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:46.799 [2024-06-10 10:07:36.122851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:46.799 [2024-06-10 10:07:36.122872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.799 [2024-06-10 10:07:36.122887] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:46.799 [2024-06-10 10:07:36.122906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:46.799 [2024-06-10 10:07:36.122921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.799 [2024-06-10 10:07:36.122939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.799 [2024-06-10 10:07:36.122980] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:46.799 [2024-06-10 10:07:36.123000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:46.799 [2024-06-10 10:07:36.123016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:46.799 [2024-06-10 10:07:36.123037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:46.799 [2024-06-10 10:07:36.123052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:46.799 [2024-06-10 10:07:36.123070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:46.799 [2024-06-10 10:07:36.123091] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:46.799 [2024-06-10 10:07:36.123114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.799 [2024-06-10 10:07:36.123132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:46.799 [2024-06-10 10:07:36.123167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:46.799 [2024-06-10 10:07:36.123189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:46.799 [2024-06-10 10:07:36.123207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:46.799 [2024-06-10 10:07:36.123226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:46.799 [2024-06-10 10:07:36.123259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:46.799 [2024-06-10 10:07:36.123275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:46.799 [2024-06-10 10:07:36.123293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:46.799 [2024-06-10 10:07:36.123309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:46.799 [2024-06-10 10:07:36.123327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:46.799 [2024-06-10 10:07:36.123342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:46.799 [2024-06-10 10:07:36.123362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:46.799 [2024-06-10 10:07:36.123378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:46.799 [2024-06-10 10:07:36.123396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:46.799 [2024-06-10 10:07:36.123411] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:46.799 [2024-06-10 
10:07:36.123431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.799 [2024-06-10 10:07:36.123450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:46.799 [2024-06-10 10:07:36.123469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:46.799 [2024-06-10 10:07:36.123484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:46.799 [2024-06-10 10:07:36.123509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:46.799 [2024-06-10 10:07:36.123540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.799 [2024-06-10 10:07:36.123572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:46.799 [2024-06-10 10:07:36.123601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.649 ms 00:18:46.799 [2024-06-10 10:07:36.123632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.799 [2024-06-10 10:07:36.123822] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:46.799 [2024-06-10 10:07:36.123857] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:50.080 [2024-06-10 10:07:38.981439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:38.982096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:50.080 [2024-06-10 10:07:38.982387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2857.645 ms 00:18:50.080 [2024-06-10 10:07:38.982505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.017182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.017617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:50.080 [2024-06-10 10:07:39.017919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.285 ms 00:18:50.080 [2024-06-10 10:07:39.018180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.018579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.018832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:50.080 [2024-06-10 10:07:39.019086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:50.080 [2024-06-10 10:07:39.019334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.070314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.070592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:50.080 [2024-06-10 10:07:39.070769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.681 ms 00:18:50.080 [2024-06-10 10:07:39.070892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.071054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 
10:07:39.071269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:50.080 [2024-06-10 10:07:39.071406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:50.080 [2024-06-10 10:07:39.071571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.072219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.072535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:50.080 [2024-06-10 10:07:39.072772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:18:50.080 [2024-06-10 10:07:39.072952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.073441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.073755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:50.080 [2024-06-10 10:07:39.074014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:18:50.080 [2024-06-10 10:07:39.074134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.094075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.094392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:50.080 [2024-06-10 10:07:39.094514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.778 ms 00:18:50.080 [2024-06-10 10:07:39.094602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.109102] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:50.080 [2024-06-10 10:07:39.124943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.125436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:50.080 [2024-06-10 10:07:39.125743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.074 ms 00:18:50.080 [2024-06-10 10:07:39.125974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.181353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.181857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:50.080 [2024-06-10 10:07:39.181975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.093 ms 00:18:50.080 [2024-06-10 10:07:39.182019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.182337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.182356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:50.080 [2024-06-10 10:07:39.182390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:18:50.080 [2024-06-10 10:07:39.182402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.215664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.215764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:50.080 [2024-06-10 10:07:39.215788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.182 ms 00:18:50.080 [2024-06-10 
10:07:39.215803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.246924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.246970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:50.080 [2024-06-10 10:07:39.246994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.054 ms 00:18:50.080 [2024-06-10 10:07:39.247008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.247785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.247822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:50.080 [2024-06-10 10:07:39.247843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:18:50.080 [2024-06-10 10:07:39.247856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.339592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.339674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:50.080 [2024-06-10 10:07:39.339706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.645 ms 00:18:50.080 [2024-06-10 10:07:39.339721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.372829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.372897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:50.080 [2024-06-10 10:07:39.372923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.024 ms 00:18:50.080 [2024-06-10 10:07:39.372937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.404808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.404873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:50.080 [2024-06-10 10:07:39.404898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.789 ms 00:18:50.080 [2024-06-10 10:07:39.404912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.436930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.437003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:50.080 [2024-06-10 10:07:39.437030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.933 ms 00:18:50.080 [2024-06-10 10:07:39.437045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.437139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.437161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:50.080 [2024-06-10 10:07:39.437183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:50.080 [2024-06-10 10:07:39.437196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.437345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.080 [2024-06-10 10:07:39.437365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:50.080 [2024-06-10 10:07:39.437382] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:50.080 [2024-06-10 10:07:39.437395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.080 [2024-06-10 10:07:39.438586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3341.199 ms, result 0 00:18:50.080 { 00:18:50.080 "name": "ftl0", 00:18:50.080 "uuid": "77494017-62af-418b-b177-661f4a90c035" 00:18:50.080 } 00:18:50.080 10:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:50.080 10:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_name=ftl0 00:18:50.080 10:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:50.080 10:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local i 00:18:50.080 10:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:50.080 10:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:50.080 10:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:50.338 10:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:50.596 [ 00:18:50.596 { 00:18:50.596 "name": "ftl0", 00:18:50.596 "aliases": [ 00:18:50.596 "77494017-62af-418b-b177-661f4a90c035" 00:18:50.596 ], 00:18:50.596 "product_name": "FTL disk", 00:18:50.596 "block_size": 4096, 00:18:50.596 "num_blocks": 20971520, 00:18:50.596 "uuid": "77494017-62af-418b-b177-661f4a90c035", 00:18:50.596 "assigned_rate_limits": { 00:18:50.596 "rw_ios_per_sec": 0, 00:18:50.596 "rw_mbytes_per_sec": 0, 00:18:50.596 "r_mbytes_per_sec": 0, 00:18:50.596 "w_mbytes_per_sec": 0 00:18:50.596 }, 00:18:50.596 "claimed": false, 00:18:50.596 "zoned": false, 00:18:50.596 "supported_io_types": { 00:18:50.596 "read": true, 00:18:50.596 "write": true, 00:18:50.596 "unmap": true, 00:18:50.596 "write_zeroes": true, 00:18:50.596 "flush": true, 00:18:50.596 "reset": false, 00:18:50.596 "compare": false, 00:18:50.596 "compare_and_write": false, 00:18:50.596 "abort": false, 00:18:50.596 "nvme_admin": false, 00:18:50.596 "nvme_io": false 00:18:50.596 }, 00:18:50.596 "driver_specific": { 00:18:50.596 "ftl": { 00:18:50.596 "base_bdev": "4a088d7a-7ed2-43ec-ac33-875d339df0be", 00:18:50.596 "cache": "nvc0n1p0" 00:18:50.597 } 00:18:50.597 } 00:18:50.597 } 00:18:50.597 ] 00:18:50.597 10:07:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # return 0 00:18:50.597 10:07:40 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:50.597 10:07:40 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:51.165 10:07:40 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:51.165 10:07:40 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:51.165 [2024-06-10 10:07:40.632395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.165 [2024-06-10 10:07:40.632495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:51.165 [2024-06-10 10:07:40.632534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.165 [2024-06-10 10:07:40.632555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.165 [2024-06-10 10:07:40.632595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:18:51.165 [2024-06-10 10:07:40.636099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.165 [2024-06-10 10:07:40.636133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:51.165 [2024-06-10 10:07:40.636168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.475 ms 00:18:51.165 [2024-06-10 10:07:40.636180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.165 [2024-06-10 10:07:40.636643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.165 [2024-06-10 10:07:40.636678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:51.165 [2024-06-10 10:07:40.636701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:18:51.165 [2024-06-10 10:07:40.636713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.165 [2024-06-10 10:07:40.640074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.165 [2024-06-10 10:07:40.640103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:51.165 [2024-06-10 10:07:40.640137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:18:51.165 [2024-06-10 10:07:40.640149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.165 [2024-06-10 10:07:40.647238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.165 [2024-06-10 10:07:40.647272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:51.165 [2024-06-10 10:07:40.647292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.054 ms 00:18:51.166 [2024-06-10 10:07:40.647305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.166 [2024-06-10 10:07:40.680827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.166 [2024-06-10 10:07:40.680885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:51.166 [2024-06-10 10:07:40.680910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.398 ms 00:18:51.166 [2024-06-10 10:07:40.680924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.426 [2024-06-10 10:07:40.700032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.426 [2024-06-10 10:07:40.700092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:51.426 [2024-06-10 10:07:40.700124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.021 ms 00:18:51.426 [2024-06-10 10:07:40.700139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.426 [2024-06-10 10:07:40.700389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.426 [2024-06-10 10:07:40.700412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:51.426 [2024-06-10 10:07:40.700452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:18:51.426 [2024-06-10 10:07:40.700466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.426 [2024-06-10 10:07:40.732753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.426 [2024-06-10 10:07:40.732814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:51.426 [2024-06-10 10:07:40.732839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.243 ms 
00:18:51.426 [2024-06-10 10:07:40.732852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.426 [2024-06-10 10:07:40.765147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.426 [2024-06-10 10:07:40.765230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:51.426 [2024-06-10 10:07:40.765271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.222 ms 00:18:51.426 [2024-06-10 10:07:40.765285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.426 [2024-06-10 10:07:40.797320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.426 [2024-06-10 10:07:40.797370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:51.426 [2024-06-10 10:07:40.797411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.971 ms 00:18:51.426 [2024-06-10 10:07:40.797425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.426 [2024-06-10 10:07:40.831002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.426 [2024-06-10 10:07:40.831067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:51.426 [2024-06-10 10:07:40.831093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.428 ms 00:18:51.426 [2024-06-10 10:07:40.831107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.426 [2024-06-10 10:07:40.831181] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:51.426 [2024-06-10 10:07:40.831227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:51.426 [2024-06-10 10:07:40.831748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.831999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832217] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832590] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:51.427 [2024-06-10 10:07:40.832830] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:51.427 [2024-06-10 10:07:40.832845] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77494017-62af-418b-b177-661f4a90c035 00:18:51.427 [2024-06-10 10:07:40.832859] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:51.427 [2024-06-10 10:07:40.832873] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:51.427 [2024-06-10 10:07:40.832888] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:51.427 [2024-06-10 10:07:40.832905] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:51.427 [2024-06-10 10:07:40.832918] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:51.427 [2024-06-10 10:07:40.832932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:51.428 [2024-06-10 10:07:40.832945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:51.428 [2024-06-10 10:07:40.832959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:51.428 [2024-06-10 10:07:40.832970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:51.428 [2024-06-10 10:07:40.832986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.428 [2024-06-10 10:07:40.832999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:51.428 [2024-06-10 10:07:40.833015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.809 ms 00:18:51.428 [2024-06-10 10:07:40.833028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.428 [2024-06-10 10:07:40.849872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.428 [2024-06-10 10:07:40.849920] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:51.428 [2024-06-10 10:07:40.849944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.759 ms 00:18:51.428 [2024-06-10 10:07:40.849958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.428 [2024-06-10 10:07:40.850413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.428 [2024-06-10 10:07:40.850444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:51.428 [2024-06-10 10:07:40.850466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:18:51.428 [2024-06-10 10:07:40.850480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.428 [2024-06-10 10:07:40.910718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.428 [2024-06-10 10:07:40.910787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:51.428 [2024-06-10 10:07:40.910812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.428 [2024-06-10 10:07:40.910826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.428 [2024-06-10 10:07:40.910921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.428 [2024-06-10 10:07:40.910938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.428 [2024-06-10 10:07:40.910954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.428 [2024-06-10 10:07:40.910967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.428 [2024-06-10 10:07:40.911113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.428 [2024-06-10 10:07:40.911137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.428 [2024-06-10 10:07:40.911165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.428 [2024-06-10 10:07:40.911181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.428 [2024-06-10 10:07:40.911223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.428 [2024-06-10 10:07:40.911239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.428 [2024-06-10 10:07:40.911256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.428 [2024-06-10 10:07:40.911269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.017423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.687 [2024-06-10 10:07:41.017491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:51.687 [2024-06-10 10:07:41.017531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.687 [2024-06-10 10:07:41.017545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.103536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.687 [2024-06-10 10:07:41.103603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.687 [2024-06-10 10:07:41.103628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.687 [2024-06-10 10:07:41.103665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.103782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:51.687 [2024-06-10 10:07:41.103803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:51.687 [2024-06-10 10:07:41.103823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.687 [2024-06-10 10:07:41.103836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.103919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.687 [2024-06-10 10:07:41.103938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:51.687 [2024-06-10 10:07:41.103956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.687 [2024-06-10 10:07:41.103969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.104119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.687 [2024-06-10 10:07:41.104146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:51.687 [2024-06-10 10:07:41.104164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.687 [2024-06-10 10:07:41.104181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.104257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.687 [2024-06-10 10:07:41.104277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:51.687 [2024-06-10 10:07:41.104293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.687 [2024-06-10 10:07:41.104306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.104369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.687 [2024-06-10 10:07:41.104386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:51.687 [2024-06-10 10:07:41.104401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.687 [2024-06-10 10:07:41.104416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.687 [2024-06-10 10:07:41.104481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.687 [2024-06-10 10:07:41.104498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:51.687 [2024-06-10 10:07:41.104517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.688 [2024-06-10 10:07:41.104530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.688 [2024-06-10 10:07:41.104748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.304 ms, result 0 00:18:51.688 true 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79174 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@949 -- # '[' -z 79174 ']' 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # kill -0 79174 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # uname 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 79174 00:18:51.688 killing process with pid 79174 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- 
common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 79174' 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # kill 79174 00:18:51.688 10:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # wait 79174 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # local sanitizers 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # shift 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local asan_lib= 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # grep libasan 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # break 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:57.004 10:07:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:57.004 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:57.004 fio-3.35 00:18:57.004 Starting 1 thread 00:19:02.271 00:19:02.271 test: (groupid=0, jobs=1): err= 0: pid=79386: Mon Jun 10 10:07:50 2024 00:19:02.271 read: IOPS=965, BW=64.1MiB/s (67.3MB/s)(255MiB/3968msec) 00:19:02.271 slat (nsec): min=5793, max=46995, avg=8222.81, stdev=3715.92 00:19:02.271 clat (usec): min=320, max=997, avg=462.93, stdev=59.05 00:19:02.271 lat (usec): min=335, max=1003, avg=471.15, stdev=59.74 00:19:02.271 clat percentiles (usec): 00:19:02.271 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 424], 00:19:02.271 | 30.00th=[ 
441], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 465], 00:19:02.271 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 570], 00:19:02.271 | 99.00th=[ 635], 99.50th=[ 685], 99.90th=[ 783], 99.95th=[ 930], 00:19:02.271 | 99.99th=[ 996] 00:19:02.271 write: IOPS=972, BW=64.6MiB/s (67.7MB/s)(256MiB/3964msec); 0 zone resets 00:19:02.271 slat (nsec): min=19605, max=94070, avg=26094.05, stdev=6900.95 00:19:02.271 clat (usec): min=354, max=920, avg=519.55, stdev=66.58 00:19:02.271 lat (usec): min=378, max=944, avg=545.65, stdev=66.71 00:19:02.271 clat percentiles (usec): 00:19:02.271 | 1.00th=[ 392], 5.00th=[ 420], 10.00th=[ 453], 20.00th=[ 469], 00:19:02.271 | 30.00th=[ 478], 40.00th=[ 494], 50.00th=[ 515], 60.00th=[ 529], 00:19:02.271 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 635], 00:19:02.271 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 857], 99.95th=[ 889], 00:19:02.271 | 99.99th=[ 922] 00:19:02.271 bw ( KiB/s): min=64056, max=68272, per=100.00%, avg=66368.00, stdev=1597.64, samples=7 00:19:02.271 iops : min= 942, max= 1004, avg=976.00, stdev=23.49, samples=7 00:19:02.271 lat (usec) : 500=60.22%, 750=39.41%, 1000=0.38% 00:19:02.271 cpu : usr=99.07%, sys=0.10%, ctx=9, majf=0, minf=1171 00:19:02.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.271 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.271 00:19:02.271 Run status group 0 (all jobs): 00:19:02.271 READ: bw=64.1MiB/s (67.3MB/s), 64.1MiB/s-64.1MiB/s (67.3MB/s-67.3MB/s), io=255MiB (267MB), run=3968-3968msec 00:19:02.271 WRITE: bw=64.6MiB/s (67.7MB/s), 64.6MiB/s-64.6MiB/s (67.7MB/s-67.7MB/s), io=256MiB (269MB), run=3964-3964msec 00:19:03.205 ----------------------------------------------------- 00:19:03.205 Suppressions used: 00:19:03.205 count bytes template 00:19:03.205 1 5 /usr/src/fio/parse.c 00:19:03.205 1 8 libtcmalloc_minimal.so 00:19:03.205 1 904 libcrypto.so 00:19:03.205 ----------------------------------------------------- 00:19:03.205 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # local sanitizers 00:19:03.205 
10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # shift 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local asan_lib= 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # grep libasan 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # break 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.205 10:07:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:03.463 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:03.463 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:03.463 fio-3.35 00:19:03.463 Starting 2 threads 00:19:35.561 00:19:35.561 first_half: (groupid=0, jobs=1): err= 0: pid=79489: Mon Jun 10 10:08:23 2024 00:19:35.561 read: IOPS=2204, BW=8819KiB/s (9031kB/s)(255MiB/29590msec) 00:19:35.561 slat (nsec): min=4473, max=49433, avg=7473.03, stdev=2273.86 00:19:35.561 clat (usec): min=936, max=315001, avg=42562.53, stdev=23302.76 00:19:35.561 lat (usec): min=946, max=315008, avg=42570.00, stdev=23302.98 00:19:35.561 clat percentiles (msec): 00:19:35.561 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 39], 20.00th=[ 39], 00:19:35.561 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:19:35.561 | 70.00th=[ 41], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 52], 00:19:35.561 | 99.00th=[ 174], 99.50th=[ 215], 99.90th=[ 271], 99.95th=[ 292], 00:19:35.561 | 99.99th=[ 305] 00:19:35.561 write: IOPS=2581, BW=10.1MiB/s (10.6MB/s)(256MiB/25391msec); 0 zone resets 00:19:35.561 slat (usec): min=5, max=567, avg= 9.68, stdev= 6.18 00:19:35.561 clat (usec): min=471, max=114025, avg=15392.23, stdev=25742.34 00:19:35.561 lat (usec): min=490, max=114035, avg=15401.91, stdev=25742.64 00:19:35.561 clat percentiles (usec): 00:19:35.561 | 1.00th=[ 996], 5.00th=[ 1336], 10.00th=[ 1582], 20.00th=[ 2114], 00:19:35.561 | 30.00th=[ 4080], 40.00th=[ 5932], 50.00th=[ 6849], 60.00th=[ 7635], 00:19:35.561 | 70.00th=[ 8848], 80.00th=[ 13960], 90.00th=[ 46924], 95.00th=[ 93848], 00:19:35.561 | 99.00th=[104334], 99.50th=[106431], 99.90th=[110625], 99.95th=[111674], 00:19:35.561 | 99.99th=[113771] 00:19:35.561 bw ( KiB/s): min= 968, max=40360, per=90.68%, avg=18724.57, stdev=10963.19, samples=28 00:19:35.561 iops : min= 242, max=10090, avg=4681.14, stdev=2740.80, samples=28 00:19:35.561 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.45% 00:19:35.561 lat (msec) : 2=8.65%, 4=5.85%, 10=22.57%, 20=8.85%, 50=45.85% 00:19:35.561 lat (msec) : 100=5.29%, 250=2.36%, 500=0.07% 
00:19:35.561 cpu : usr=99.08%, sys=0.28%, ctx=45, majf=0, minf=5533 00:19:35.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:35.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.561 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:35.561 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:35.561 second_half: (groupid=0, jobs=1): err= 0: pid=79490: Mon Jun 10 10:08:23 2024 00:19:35.561 read: IOPS=2220, BW=8884KiB/s (9097kB/s)(254MiB/29332msec) 00:19:35.561 slat (nsec): min=4298, max=63200, avg=7307.34, stdev=2274.43 00:19:35.561 clat (usec): min=900, max=274943, avg=44111.96, stdev=22089.06 00:19:35.561 lat (usec): min=908, max=274951, avg=44119.27, stdev=22089.24 00:19:35.561 clat percentiles (msec): 00:19:35.561 | 1.00th=[ 7], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:19:35.561 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:19:35.561 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 55], 00:19:35.561 | 99.00th=[ 174], 99.50th=[ 192], 99.90th=[ 218], 99.95th=[ 228], 00:19:35.561 | 99.99th=[ 239] 00:19:35.561 write: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(256MiB/19147msec); 0 zone resets 00:19:35.561 slat (usec): min=5, max=510, avg= 9.48, stdev= 5.53 00:19:35.561 clat (usec): min=433, max=114884, avg=13416.03, stdev=24998.08 00:19:35.561 lat (usec): min=451, max=114895, avg=13425.52, stdev=24998.17 00:19:35.561 clat percentiles (usec): 00:19:35.561 | 1.00th=[ 1106], 5.00th=[ 1385], 10.00th=[ 1565], 20.00th=[ 1876], 00:19:35.561 | 30.00th=[ 2278], 40.00th=[ 3687], 50.00th=[ 5080], 60.00th=[ 6718], 00:19:35.561 | 70.00th=[ 8356], 80.00th=[ 13435], 90.00th=[ 17695], 95.00th=[ 92799], 00:19:35.561 | 99.00th=[104334], 99.50th=[106431], 99.90th=[109577], 99.95th=[112722], 00:19:35.561 | 99.99th=[113771] 00:19:35.561 bw ( KiB/s): min= 1720, max=39912, per=100.00%, avg=21843.33, stdev=9965.29, samples=24 00:19:35.561 iops : min= 430, max= 9978, avg=5460.83, stdev=2491.32, samples=24 00:19:35.561 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.22% 00:19:35.561 lat (msec) : 2=11.79%, 4=9.98%, 10=15.81%, 20=8.54%, 50=45.72% 00:19:35.561 lat (msec) : 100=5.02%, 250=2.89%, 500=0.01% 00:19:35.561 cpu : usr=99.11%, sys=0.24%, ctx=60, majf=0, minf=5578 00:19:35.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:35.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.561 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:35.561 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:35.561 00:19:35.561 Run status group 0 (all jobs): 00:19:35.561 READ: bw=17.2MiB/s (18.0MB/s), 8819KiB/s-8884KiB/s (9031kB/s-9097kB/s), io=509MiB (534MB), run=29332-29590msec 00:19:35.561 WRITE: bw=20.2MiB/s (21.1MB/s), 10.1MiB/s-13.4MiB/s (10.6MB/s-14.0MB/s), io=512MiB (537MB), run=19147-25391msec 00:19:36.537 ----------------------------------------------------- 00:19:36.537 Suppressions used: 00:19:36.537 count bytes template 00:19:36.537 2 10 /usr/src/fio/parse.c 00:19:36.537 2 192 /usr/src/fio/iolog.c 00:19:36.537 1 8 libtcmalloc_minimal.so 00:19:36.537 1 904 libcrypto.so 00:19:36.537 ----------------------------------------------------- 00:19:36.537 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit 
randw-verify-j2 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1355 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1338 -- # local sanitizers 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # shift 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local asan_lib= 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # grep libasan 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # break 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:36.537 10:08:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:36.537 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:36.537 fio-3.35 00:19:36.537 Starting 1 thread 00:19:54.676 00:19:54.676 test: (groupid=0, jobs=1): err= 0: pid=79848: Mon Jun 10 10:08:43 2024 00:19:54.676 read: IOPS=6311, BW=24.7MiB/s (25.9MB/s)(255MiB/10330msec) 00:19:54.676 slat (nsec): min=4680, max=46770, avg=7074.21, stdev=2295.43 00:19:54.676 clat (usec): min=773, max=40906, avg=20267.87, stdev=1003.09 00:19:54.676 lat (usec): min=778, max=40912, avg=20274.95, stdev=1003.09 00:19:54.676 clat percentiles (usec): 00:19:54.676 | 1.00th=[19268], 5.00th=[19268], 10.00th=[19530], 20.00th=[19792], 00:19:54.676 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:19:54.676 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21103], 95.00th=[21365], 00:19:54.676 | 99.00th=[23200], 99.50th=[23462], 99.90th=[30016], 
99.95th=[35914], 00:19:54.676 | 99.99th=[40109] 00:19:54.676 write: IOPS=11.4k, BW=44.4MiB/s (46.5MB/s)(256MiB/5768msec); 0 zone resets 00:19:54.676 slat (usec): min=5, max=162, avg=10.04, stdev= 5.68 00:19:54.676 clat (usec): min=667, max=67567, avg=11203.49, stdev=14380.25 00:19:54.676 lat (usec): min=674, max=67577, avg=11213.53, stdev=14380.29 00:19:54.676 clat percentiles (usec): 00:19:54.676 | 1.00th=[ 1004], 5.00th=[ 1205], 10.00th=[ 1352], 20.00th=[ 1565], 00:19:54.676 | 30.00th=[ 1795], 40.00th=[ 2343], 50.00th=[ 7308], 60.00th=[ 8291], 00:19:54.676 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[41157], 95.00th=[45351], 00:19:54.676 | 99.00th=[49546], 99.50th=[51119], 99.90th=[62129], 99.95th=[63701], 00:19:54.676 | 99.99th=[66847] 00:19:54.676 bw ( KiB/s): min=21352, max=64096, per=96.13%, avg=43690.67, stdev=11986.20, samples=12 00:19:54.676 iops : min= 5338, max=16024, avg=10922.67, stdev=2996.55, samples=12 00:19:54.676 lat (usec) : 750=0.01%, 1000=0.49% 00:19:54.676 lat (msec) : 2=17.59%, 4=2.82%, 10=16.88%, 20=23.99%, 50=37.80% 00:19:54.676 lat (msec) : 100=0.42% 00:19:54.676 cpu : usr=98.60%, sys=0.58%, ctx=31, majf=0, minf=5567 00:19:54.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:54.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.676 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:54.676 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:54.676 00:19:54.676 Run status group 0 (all jobs): 00:19:54.676 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=255MiB (267MB), run=10330-10330msec 00:19:54.676 WRITE: bw=44.4MiB/s (46.5MB/s), 44.4MiB/s-44.4MiB/s (46.5MB/s-46.5MB/s), io=256MiB (268MB), run=5768-5768msec 00:19:55.671 ----------------------------------------------------- 00:19:55.671 Suppressions used: 00:19:55.671 count bytes template 00:19:55.671 1 5 /usr/src/fio/parse.c 00:19:55.671 2 192 /usr/src/fio/iolog.c 00:19:55.671 1 8 libtcmalloc_minimal.so 00:19:55.671 1 904 libcrypto.so 00:19:55.671 ----------------------------------------------------- 00:19:55.671 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.671 Remove shared memory files 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62017 /dev/shm/spdk_tgt_trace.pid78120 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:55.671 ************************************ 00:19:55.671 END TEST ftl_fio_basic 00:19:55.671 ************************************ 00:19:55.671 00:19:55.671 real 1m13.772s 00:19:55.671 user 2m45.425s 00:19:55.671 sys 0m3.729s 
00:19:55.671 10:08:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:55.671 10:08:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:55.671 10:08:45 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:55.671 10:08:45 ftl -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:19:55.671 10:08:45 ftl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:55.671 10:08:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:55.671 ************************************ 00:19:55.671 START TEST ftl_bdevperf 00:19:55.671 ************************************ 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:55.671 * Looking for test storage... 00:19:55.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:55.671 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80103 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80103 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 80103 ']' 00:19:55.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:55.672 10:08:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:55.930 [2024-06-10 10:08:45.232081] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:19:55.930 [2024-06-10 10:08:45.232247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80103 ] 00:19:55.930 [2024-06-10 10:08:45.397057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.189 [2024-06-10 10:08:45.588246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:56.756 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_name=nvme0n1 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_info 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bs 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local nb 00:19:57.324 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:57.582 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:19:57.583 { 00:19:57.583 "name": "nvme0n1", 00:19:57.583 "aliases": [ 00:19:57.583 "c4544e34-a429-4f89-8e03-022ef9964264" 00:19:57.583 ], 00:19:57.583 "product_name": "NVMe disk", 00:19:57.583 "block_size": 4096, 00:19:57.583 "num_blocks": 1310720, 00:19:57.583 "uuid": "c4544e34-a429-4f89-8e03-022ef9964264", 00:19:57.583 "assigned_rate_limits": { 00:19:57.583 "rw_ios_per_sec": 0, 00:19:57.583 "rw_mbytes_per_sec": 0, 00:19:57.583 "r_mbytes_per_sec": 0, 00:19:57.583 "w_mbytes_per_sec": 0 00:19:57.583 }, 00:19:57.583 "claimed": true, 00:19:57.583 "claim_type": "read_many_write_one", 00:19:57.583 "zoned": false, 00:19:57.583 "supported_io_types": { 00:19:57.583 "read": true, 00:19:57.583 "write": true, 00:19:57.583 "unmap": true, 00:19:57.583 "write_zeroes": true, 00:19:57.583 "flush": true, 00:19:57.583 "reset": true, 00:19:57.583 "compare": true, 00:19:57.583 "compare_and_write": false, 00:19:57.583 "abort": true, 00:19:57.583 "nvme_admin": true, 00:19:57.583 "nvme_io": true 00:19:57.583 }, 00:19:57.583 "driver_specific": { 00:19:57.583 "nvme": [ 00:19:57.583 { 00:19:57.583 "pci_address": "0000:00:11.0", 00:19:57.583 "trid": { 00:19:57.583 "trtype": "PCIe", 00:19:57.583 "traddr": "0000:00:11.0" 00:19:57.583 }, 00:19:57.583 "ctrlr_data": { 00:19:57.583 "cntlid": 0, 00:19:57.583 "vendor_id": "0x1b36", 00:19:57.583 "model_number": "QEMU NVMe Ctrl", 00:19:57.583 "serial_number": "12341", 
00:19:57.583 "firmware_revision": "8.0.0", 00:19:57.583 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:57.583 "oacs": { 00:19:57.583 "security": 0, 00:19:57.583 "format": 1, 00:19:57.583 "firmware": 0, 00:19:57.583 "ns_manage": 1 00:19:57.583 }, 00:19:57.583 "multi_ctrlr": false, 00:19:57.583 "ana_reporting": false 00:19:57.583 }, 00:19:57.583 "vs": { 00:19:57.583 "nvme_version": "1.4" 00:19:57.583 }, 00:19:57.583 "ns_data": { 00:19:57.583 "id": 1, 00:19:57.583 "can_share": false 00:19:57.583 } 00:19:57.583 } 00:19:57.583 ], 00:19:57.583 "mp_policy": "active_passive" 00:19:57.583 } 00:19:57.583 } 00:19:57.583 ]' 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bs=4096 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # nb=1310720 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_size=5120 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # echo 5120 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:57.583 10:08:46 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:57.841 10:08:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=4a6b207b-eb25-4290-bdb7-dba81b5a0aa0 00:19:57.841 10:08:47 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:57.841 10:08:47 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a6b207b-eb25-4290-bdb7-dba81b5a0aa0 00:19:58.100 10:08:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:58.358 10:08:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=8ebdbdf1-4241-43f9-89ed-a8ef7f282648 00:19:58.358 10:08:47 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8ebdbdf1-4241-43f9-89ed-a8ef7f282648 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=123f8539-294f-45b3-bf28-c3eded03a066 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 123f8539-294f-45b3-bf28-c3eded03a066 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=123f8539-294f-45b3-bf28-c3eded03a066 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 123f8539-294f-45b3-bf28-c3eded03a066 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_name=123f8539-294f-45b3-bf28-c3eded03a066 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_info 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bs 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local nb 00:19:58.616 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 123f8539-294f-45b3-bf28-c3eded03a066 00:19:58.874 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:19:58.874 { 00:19:58.874 "name": "123f8539-294f-45b3-bf28-c3eded03a066", 00:19:58.874 "aliases": [ 00:19:58.874 "lvs/nvme0n1p0" 00:19:58.874 ], 00:19:58.874 "product_name": "Logical Volume", 00:19:58.874 "block_size": 4096, 00:19:58.874 "num_blocks": 26476544, 00:19:58.874 "uuid": "123f8539-294f-45b3-bf28-c3eded03a066", 00:19:58.874 "assigned_rate_limits": { 00:19:58.874 "rw_ios_per_sec": 0, 00:19:58.874 "rw_mbytes_per_sec": 0, 00:19:58.874 "r_mbytes_per_sec": 0, 00:19:58.874 "w_mbytes_per_sec": 0 00:19:58.874 }, 00:19:58.874 "claimed": false, 00:19:58.874 "zoned": false, 00:19:58.874 "supported_io_types": { 00:19:58.874 "read": true, 00:19:58.874 "write": true, 00:19:58.874 "unmap": true, 00:19:58.874 "write_zeroes": true, 00:19:58.874 "flush": false, 00:19:58.874 "reset": true, 00:19:58.874 "compare": false, 00:19:58.874 "compare_and_write": false, 00:19:58.874 "abort": false, 00:19:58.874 "nvme_admin": false, 00:19:58.874 "nvme_io": false 00:19:58.874 }, 00:19:58.874 "driver_specific": { 00:19:58.874 "lvol": { 00:19:58.874 "lvol_store_uuid": "8ebdbdf1-4241-43f9-89ed-a8ef7f282648", 00:19:58.874 "base_bdev": "nvme0n1", 00:19:58.874 "thin_provision": true, 00:19:58.874 "num_allocated_clusters": 0, 00:19:58.874 "snapshot": false, 00:19:58.874 "clone": false, 00:19:58.874 "esnap_clone": false 00:19:58.874 } 00:19:58.874 } 00:19:58.874 } 00:19:58.874 ]' 00:19:58.874 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:19:59.132 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bs=4096 00:19:59.133 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:19:59.133 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # nb=26476544 00:19:59.133 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:19:59.133 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # echo 103424 00:19:59.133 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:59.133 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:59.133 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 123f8539-294f-45b3-bf28-c3eded03a066 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_name=123f8539-294f-45b3-bf28-c3eded03a066 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_info 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bs 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local nb 00:19:59.391 10:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 123f8539-294f-45b3-bf28-c3eded03a066 00:19:59.650 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:19:59.650 { 00:19:59.650 "name": 
"123f8539-294f-45b3-bf28-c3eded03a066", 00:19:59.650 "aliases": [ 00:19:59.650 "lvs/nvme0n1p0" 00:19:59.650 ], 00:19:59.650 "product_name": "Logical Volume", 00:19:59.650 "block_size": 4096, 00:19:59.650 "num_blocks": 26476544, 00:19:59.650 "uuid": "123f8539-294f-45b3-bf28-c3eded03a066", 00:19:59.650 "assigned_rate_limits": { 00:19:59.650 "rw_ios_per_sec": 0, 00:19:59.650 "rw_mbytes_per_sec": 0, 00:19:59.650 "r_mbytes_per_sec": 0, 00:19:59.650 "w_mbytes_per_sec": 0 00:19:59.650 }, 00:19:59.650 "claimed": false, 00:19:59.650 "zoned": false, 00:19:59.650 "supported_io_types": { 00:19:59.650 "read": true, 00:19:59.650 "write": true, 00:19:59.650 "unmap": true, 00:19:59.650 "write_zeroes": true, 00:19:59.650 "flush": false, 00:19:59.650 "reset": true, 00:19:59.650 "compare": false, 00:19:59.650 "compare_and_write": false, 00:19:59.650 "abort": false, 00:19:59.650 "nvme_admin": false, 00:19:59.650 "nvme_io": false 00:19:59.650 }, 00:19:59.650 "driver_specific": { 00:19:59.650 "lvol": { 00:19:59.650 "lvol_store_uuid": "8ebdbdf1-4241-43f9-89ed-a8ef7f282648", 00:19:59.650 "base_bdev": "nvme0n1", 00:19:59.650 "thin_provision": true, 00:19:59.650 "num_allocated_clusters": 0, 00:19:59.650 "snapshot": false, 00:19:59.650 "clone": false, 00:19:59.650 "esnap_clone": false 00:19:59.650 } 00:19:59.650 } 00:19:59.650 } 00:19:59.650 ]' 00:19:59.650 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:19:59.650 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bs=4096 00:19:59.650 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:19:59.916 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # nb=26476544 00:19:59.916 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:19:59.916 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # echo 103424 00:19:59.916 10:08:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:59.916 10:08:49 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:00.180 10:08:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:20:00.180 10:08:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 123f8539-294f-45b3-bf28-c3eded03a066 00:20:00.180 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local bdev_name=123f8539-294f-45b3-bf28-c3eded03a066 00:20:00.180 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_info 00:20:00.180 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bs 00:20:00.180 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local nb 00:20:00.180 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 123f8539-294f-45b3-bf28-c3eded03a066 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:20:00.439 { 00:20:00.439 "name": "123f8539-294f-45b3-bf28-c3eded03a066", 00:20:00.439 "aliases": [ 00:20:00.439 "lvs/nvme0n1p0" 00:20:00.439 ], 00:20:00.439 "product_name": "Logical Volume", 00:20:00.439 "block_size": 4096, 00:20:00.439 "num_blocks": 26476544, 00:20:00.439 "uuid": "123f8539-294f-45b3-bf28-c3eded03a066", 00:20:00.439 "assigned_rate_limits": { 00:20:00.439 "rw_ios_per_sec": 0, 00:20:00.439 "rw_mbytes_per_sec": 0, 00:20:00.439 "r_mbytes_per_sec": 0, 00:20:00.439 "w_mbytes_per_sec": 0 00:20:00.439 }, 00:20:00.439 "claimed": false, 
00:20:00.439 "zoned": false, 00:20:00.439 "supported_io_types": { 00:20:00.439 "read": true, 00:20:00.439 "write": true, 00:20:00.439 "unmap": true, 00:20:00.439 "write_zeroes": true, 00:20:00.439 "flush": false, 00:20:00.439 "reset": true, 00:20:00.439 "compare": false, 00:20:00.439 "compare_and_write": false, 00:20:00.439 "abort": false, 00:20:00.439 "nvme_admin": false, 00:20:00.439 "nvme_io": false 00:20:00.439 }, 00:20:00.439 "driver_specific": { 00:20:00.439 "lvol": { 00:20:00.439 "lvol_store_uuid": "8ebdbdf1-4241-43f9-89ed-a8ef7f282648", 00:20:00.439 "base_bdev": "nvme0n1", 00:20:00.439 "thin_provision": true, 00:20:00.439 "num_allocated_clusters": 0, 00:20:00.439 "snapshot": false, 00:20:00.439 "clone": false, 00:20:00.439 "esnap_clone": false 00:20:00.439 } 00:20:00.439 } 00:20:00.439 } 00:20:00.439 ]' 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bs=4096 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # nb=26476544 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # echo 103424 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:20:00.439 10:08:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 123f8539-294f-45b3-bf28-c3eded03a066 -c nvc0n1p0 --l2p_dram_limit 20 00:20:00.711 [2024-06-10 10:08:50.143872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.143978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.711 [2024-06-10 10:08:50.144017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.711 [2024-06-10 10:08:50.144049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.144127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.144149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.711 [2024-06-10 10:08:50.144164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:00.711 [2024-06-10 10:08:50.144178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.144209] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.711 [2024-06-10 10:08:50.145250] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.711 [2024-06-10 10:08:50.145288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.145312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.711 [2024-06-10 10:08:50.145325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:20:00.711 [2024-06-10 10:08:50.145340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.145457] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4fb3fa36-47ee-4292-9202-3c455a832c40 00:20:00.711 [2024-06-10 10:08:50.146467] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.146510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:00.711 [2024-06-10 10:08:50.146531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:00.711 [2024-06-10 10:08:50.146547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.151310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.151358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.711 [2024-06-10 10:08:50.151379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:20:00.711 [2024-06-10 10:08:50.151392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.151513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.151535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.711 [2024-06-10 10:08:50.151556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:00.711 [2024-06-10 10:08:50.151569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.151677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.151698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.711 [2024-06-10 10:08:50.151713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:00.711 [2024-06-10 10:08:50.151726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.151761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.711 [2024-06-10 10:08:50.156395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.156440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.711 [2024-06-10 10:08:50.156457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:20:00.711 [2024-06-10 10:08:50.156474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.156519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.156541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.711 [2024-06-10 10:08:50.156557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:00.711 [2024-06-10 10:08:50.156571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.156613] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:00.711 [2024-06-10 10:08:50.156796] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.711 [2024-06-10 10:08:50.156819] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.711 [2024-06-10 10:08:50.156844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:00.711 [2024-06-10 10:08:50.156861] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.711 [2024-06-10 
10:08:50.156878] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.711 [2024-06-10 10:08:50.156891] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:00.711 [2024-06-10 10:08:50.156905] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.711 [2024-06-10 10:08:50.156916] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.711 [2024-06-10 10:08:50.156932] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.711 [2024-06-10 10:08:50.156946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.156960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.711 [2024-06-10 10:08:50.156973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:20:00.711 [2024-06-10 10:08:50.156987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.157081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.711 [2024-06-10 10:08:50.157107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.711 [2024-06-10 10:08:50.157122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:00.711 [2024-06-10 10:08:50.157136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.711 [2024-06-10 10:08:50.157238] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.711 [2024-06-10 10:08:50.157262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.711 [2024-06-10 10:08:50.157277] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.711 [2024-06-10 10:08:50.157292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.711 [2024-06-10 10:08:50.157305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.711 [2024-06-10 10:08:50.157319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.711 [2024-06-10 10:08:50.157331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:00.711 [2024-06-10 10:08:50.157359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.712 [2024-06-10 10:08:50.157371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.712 [2024-06-10 10:08:50.157396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.712 [2024-06-10 10:08:50.157410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:00.712 [2024-06-10 10:08:50.157422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.712 [2024-06-10 10:08:50.157435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.712 [2024-06-10 10:08:50.157447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:00.712 [2024-06-10 10:08:50.157461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.712 [2024-06-10 10:08:50.157490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:00.712 [2024-06-10 
10:08:50.157503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.712 [2024-06-10 10:08:50.157540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.712 [2024-06-10 10:08:50.157571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.712 [2024-06-10 10:08:50.157586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.712 [2024-06-10 10:08:50.157611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.712 [2024-06-10 10:08:50.157623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157636] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.712 [2024-06-10 10:08:50.157665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.712 [2024-06-10 10:08:50.157680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.712 [2024-06-10 10:08:50.157706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.712 [2024-06-10 10:08:50.157717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.712 [2024-06-10 10:08:50.157746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.712 [2024-06-10 10:08:50.157760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:00.712 [2024-06-10 10:08:50.157772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.712 [2024-06-10 10:08:50.157785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.712 [2024-06-10 10:08:50.157797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:00.712 [2024-06-10 10:08:50.157810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.712 [2024-06-10 10:08:50.157837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:00.712 [2024-06-10 10:08:50.157849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157872] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.712 [2024-06-10 10:08:50.157887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.712 [2024-06-10 10:08:50.157901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.712 [2024-06-10 10:08:50.157914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.712 [2024-06-10 10:08:50.157928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.712 [2024-06-10 10:08:50.157940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.712 [2024-06-10 10:08:50.157955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:20:00.712 [2024-06-10 10:08:50.157967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.712 [2024-06-10 10:08:50.157980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.712 [2024-06-10 10:08:50.157992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.712 [2024-06-10 10:08:50.158012] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.712 [2024-06-10 10:08:50.158027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.712 [2024-06-10 10:08:50.158043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:00.712 [2024-06-10 10:08:50.158056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:00.712 [2024-06-10 10:08:50.158070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:00.712 [2024-06-10 10:08:50.158082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:00.712 [2024-06-10 10:08:50.158096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:00.712 [2024-06-10 10:08:50.158109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:00.712 [2024-06-10 10:08:50.158123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:00.712 [2024-06-10 10:08:50.158135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:00.712 [2024-06-10 10:08:50.158149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:00.712 [2024-06-10 10:08:50.158162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:00.712 [2024-06-10 10:08:50.158180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:00.712 [2024-06-10 10:08:50.158192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:00.712 [2024-06-10 10:08:50.158206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:00.712 [2024-06-10 10:08:50.158219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:00.712 [2024-06-10 10:08:50.158233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.712 [2024-06-10 10:08:50.158246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.712 [2024-06-10 10:08:50.158262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.712 [2024-06-10 10:08:50.158274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.712 [2024-06-10 10:08:50.158288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.712 [2024-06-10 10:08:50.158301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.712 [2024-06-10 10:08:50.158316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.712 [2024-06-10 10:08:50.158329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.712 [2024-06-10 10:08:50.158343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:20:00.712 [2024-06-10 10:08:50.158358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.712 [2024-06-10 10:08:50.158407] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:00.712 [2024-06-10 10:08:50.158426] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:03.245 [2024-06-10 10:08:52.304662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.304923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:03.245 [2024-06-10 10:08:52.305078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2146.259 ms 00:20:03.245 [2024-06-10 10:08:52.305134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.344973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.345246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.245 [2024-06-10 10:08:52.345392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.458 ms 00:20:03.245 [2024-06-10 10:08:52.345508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.345766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.345829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:03.245 [2024-06-10 10:08:52.345948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:03.245 [2024-06-10 10:08:52.346000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.385913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.386161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.245 [2024-06-10 10:08:52.386293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.814 ms 00:20:03.245 [2024-06-10 10:08:52.386421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.386526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.386679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.245 [2024-06-10 10:08:52.386742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:03.245 [2024-06-10 10:08:52.386882] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.387411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.387561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.245 [2024-06-10 10:08:52.387703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:03.245 [2024-06-10 10:08:52.387811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.388061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.388184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.245 [2024-06-10 10:08:52.388316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:03.245 [2024-06-10 10:08:52.388443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.405025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.405204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.245 [2024-06-10 10:08:52.405239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.506 ms 00:20:03.245 [2024-06-10 10:08:52.405254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.419385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:03.245 [2024-06-10 10:08:52.424825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.424883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:03.245 [2024-06-10 10:08:52.424905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.464 ms 00:20:03.245 [2024-06-10 10:08:52.424920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.485404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.485493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:03.245 [2024-06-10 10:08:52.485514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.426 ms 00:20:03.245 [2024-06-10 10:08:52.485541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.485805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.485830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:03.245 [2024-06-10 10:08:52.485844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:03.245 [2024-06-10 10:08:52.485873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.518329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.518396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:03.245 [2024-06-10 10:08:52.518415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.369 ms 00:20:03.245 [2024-06-10 10:08:52.518430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.550345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.550407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Save initial chunk info metadata 00:20:03.245 [2024-06-10 10:08:52.550428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.865 ms 00:20:03.245 [2024-06-10 10:08:52.550444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.551217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.551260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:03.245 [2024-06-10 10:08:52.551277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:20:03.245 [2024-06-10 10:08:52.551292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.642602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.642694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:03.245 [2024-06-10 10:08:52.642721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.244 ms 00:20:03.245 [2024-06-10 10:08:52.642741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.678823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.678942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:03.245 [2024-06-10 10:08:52.678991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.020 ms 00:20:03.245 [2024-06-10 10:08:52.679007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.245 [2024-06-10 10:08:52.712743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.245 [2024-06-10 10:08:52.712803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:03.246 [2024-06-10 10:08:52.712838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.676 ms 00:20:03.246 [2024-06-10 10:08:52.712851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.246 [2024-06-10 10:08:52.745636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.246 [2024-06-10 10:08:52.745722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:03.246 [2024-06-10 10:08:52.745744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.720 ms 00:20:03.246 [2024-06-10 10:08:52.745758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.246 [2024-06-10 10:08:52.745830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.246 [2024-06-10 10:08:52.745857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:03.246 [2024-06-10 10:08:52.745872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:03.246 [2024-06-10 10:08:52.745903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.246 [2024-06-10 10:08:52.746035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.246 [2024-06-10 10:08:52.746059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:03.246 [2024-06-10 10:08:52.746074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:03.246 [2024-06-10 10:08:52.746089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.246 [2024-06-10 10:08:52.747119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL startup', duration = 2602.722 ms, result 0 00:20:03.246 { 00:20:03.246 "name": "ftl0", 00:20:03.246 "uuid": "4fb3fa36-47ee-4292-9202-3c455a832c40" 00:20:03.246 } 00:20:03.505 10:08:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:03.505 10:08:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:20:03.505 10:08:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:20:03.762 10:08:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:03.762 [2024-06-10 10:08:53.191739] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:03.762 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:03.762 Zero copy mechanism will not be used. 00:20:03.762 Running I/O for 4 seconds... 00:20:07.964 00:20:07.964 Latency(us) 00:20:07.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.964 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:07.964 ftl0 : 4.00 1770.06 117.54 0.00 0.00 592.30 236.45 1489.45 00:20:07.964 =================================================================================================================== 00:20:07.964 Total : 1770.06 117.54 0.00 0.00 592.30 236.45 1489.45 00:20:07.964 [2024-06-10 10:08:57.202795] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:07.964 0 00:20:07.964 10:08:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:07.964 [2024-06-10 10:08:57.338526] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:07.964 Running I/O for 4 seconds... 00:20:12.170 00:20:12.170 Latency(us) 00:20:12.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.170 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:12.170 ftl0 : 4.02 7478.42 29.21 0.00 0.00 17073.40 316.51 32887.16 00:20:12.170 =================================================================================================================== 00:20:12.170 Total : 7478.42 29.21 0.00 0.00 17073.40 0.00 32887.16 00:20:12.170 [2024-06-10 10:09:01.366793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:12.170 0 00:20:12.170 10:09:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:12.170 [2024-06-10 10:09:01.506787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:12.170 Running I/O for 4 seconds... 
00:20:16.357 00:20:16.357 Latency(us) 00:20:16.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.357 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:16.357 Verification LBA range: start 0x0 length 0x1400000 00:20:16.357 ftl0 : 4.01 5654.40 22.09 0.00 0.00 22554.14 368.64 27286.81 00:20:16.357 =================================================================================================================== 00:20:16.357 Total : 5654.40 22.09 0.00 0.00 22554.14 0.00 27286.81 00:20:16.357 0 00:20:16.357 [2024-06-10 10:09:05.540023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:16.357 10:09:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:16.357 [2024-06-10 10:09:05.826020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.357 [2024-06-10 10:09:05.826092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:16.357 [2024-06-10 10:09:05.826117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.357 [2024-06-10 10:09:05.826133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.357 [2024-06-10 10:09:05.826170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.357 [2024-06-10 10:09:05.829781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.357 [2024-06-10 10:09:05.829821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:16.357 [2024-06-10 10:09:05.829844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:20:16.357 [2024-06-10 10:09:05.829857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.357 [2024-06-10 10:09:05.831327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.358 [2024-06-10 10:09:05.831374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:16.358 [2024-06-10 10:09:05.831397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:20:16.358 [2024-06-10 10:09:05.831411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-06-10 10:09:06.019051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-06-10 10:09:06.019130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:16.615 [2024-06-10 10:09:06.019173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 187.599 ms 00:20:16.615 [2024-06-10 10:09:06.019187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-06-10 10:09:06.026004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-06-10 10:09:06.026043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:16.615 [2024-06-10 10:09:06.026063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.759 ms 00:20:16.615 [2024-06-10 10:09:06.026076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.615 [2024-06-10 10:09:06.058958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.615 [2024-06-10 10:09:06.059040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.615 [2024-06-10 10:09:06.059065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.757 ms 00:20:16.616 [2024-06-10 10:09:06.059078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.616 [2024-06-10 10:09:06.078501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.616 [2024-06-10 10:09:06.078561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.616 [2024-06-10 10:09:06.078602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.330 ms 00:20:16.616 [2024-06-10 10:09:06.078615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.616 [2024-06-10 10:09:06.078893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.616 [2024-06-10 10:09:06.078918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.616 [2024-06-10 10:09:06.078936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:20:16.616 [2024-06-10 10:09:06.078949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.616 [2024-06-10 10:09:06.110060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.616 [2024-06-10 10:09:06.110114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:16.616 [2024-06-10 10:09:06.110154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.079 ms 00:20:16.616 [2024-06-10 10:09:06.110167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.875 [2024-06-10 10:09:06.142392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.875 [2024-06-10 10:09:06.142454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:16.875 [2024-06-10 10:09:06.142494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.166 ms 00:20:16.875 [2024-06-10 10:09:06.142507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.875 [2024-06-10 10:09:06.175347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.875 [2024-06-10 10:09:06.175416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.875 [2024-06-10 10:09:06.175441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.773 ms 00:20:16.875 [2024-06-10 10:09:06.175455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.875 [2024-06-10 10:09:06.208063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.875 [2024-06-10 10:09:06.208141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.875 [2024-06-10 10:09:06.208177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.435 ms 00:20:16.875 [2024-06-10 10:09:06.208190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.875 [2024-06-10 10:09:06.208278] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.875 [2024-06-10 10:09:06.208309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:16.875 [2024-06-10 10:09:06.208384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.875 [2024-06-10 10:09:06.208693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.208996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209577] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.876 [2024-06-10 10:09:06.209651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:16.877 [2024-06-10 10:09:06.209924] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.877 [2024-06-10 10:09:06.209939] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fb3fa36-47ee-4292-9202-3c455a832c40 00:20:16.877 [2024-06-10 10:09:06.209951] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.877 [2024-06-10 10:09:06.209966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:16.877 [2024-06-10 10:09:06.209980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.877 [2024-06-10 10:09:06.209994] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.877 [2024-06-10 10:09:06.210006] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.877 [2024-06-10 10:09:06.210025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:16.877 [2024-06-10 10:09:06.210037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.877 [2024-06-10 10:09:06.210050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.877 [2024-06-10 10:09:06.210061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.877 [2024-06-10 10:09:06.210078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.877 [2024-06-10 10:09:06.210091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.877 [2024-06-10 10:09:06.210107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:20:16.877 [2024-06-10 10:09:06.210119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.877 [2024-06-10 10:09:06.228144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.877 [2024-06-10 10:09:06.228205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.877 [2024-06-10 10:09:06.228230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.926 ms 00:20:16.877 [2024-06-10 10:09:06.228245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.877 [2024-06-10 10:09:06.228781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.877 [2024-06-10 10:09:06.228814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.877 [2024-06-10 10:09:06.228833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:20:16.877 [2024-06-10 10:09:06.228846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.877 [2024-06-10 10:09:06.270175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.877 [2024-06-10 10:09:06.270230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.877 [2024-06-10 10:09:06.270271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.877 [2024-06-10 10:09:06.270285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.877 [2024-06-10 10:09:06.270413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.877 [2024-06-10 10:09:06.270430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.877 [2024-06-10 10:09:06.270444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.877 [2024-06-10 10:09:06.270456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.877 [2024-06-10 10:09:06.270610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.877 [2024-06-10 10:09:06.270630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.877 [2024-06-10 10:09:06.270646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.877 [2024-06-10 10:09:06.270659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.877 [2024-06-10 10:09:06.270686] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.877 [2024-06-10 10:09:06.270702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.877 [2024-06-10 10:09:06.270716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.877 [2024-06-10 10:09:06.270749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.877 [2024-06-10 10:09:06.369085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.877 [2024-06-10 10:09:06.369155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.877 [2024-06-10 10:09:06.369179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.877 [2024-06-10 10:09:06.369195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.454187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.136 [2024-06-10 10:09:06.454256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.136 [2024-06-10 10:09:06.454295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.136 [2024-06-10 10:09:06.454308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.454434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.136 [2024-06-10 10:09:06.454454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.136 [2024-06-10 10:09:06.454469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.136 [2024-06-10 10:09:06.454482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.454548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.136 [2024-06-10 10:09:06.454567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.136 [2024-06-10 10:09:06.454583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.136 [2024-06-10 10:09:06.454594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.454766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.136 [2024-06-10 10:09:06.454788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.136 [2024-06-10 10:09:06.454821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.136 [2024-06-10 10:09:06.454834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.454897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.136 [2024-06-10 10:09:06.454920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.136 [2024-06-10 10:09:06.454936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.136 [2024-06-10 10:09:06.454949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.455001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.136 [2024-06-10 10:09:06.455019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.136 [2024-06-10 10:09:06.455034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.136 [2024-06-10 10:09:06.455046] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.455108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.136 [2024-06-10 10:09:06.455127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.136 [2024-06-10 10:09:06.455157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.136 [2024-06-10 10:09:06.455171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.136 [2024-06-10 10:09:06.455325] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 629.266 ms, result 0 00:20:17.136 true 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80103 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 80103 ']' 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # kill -0 80103 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # uname 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80103 00:20:17.136 killing process with pid 80103 00:20:17.136 Received shutdown signal, test time was about 4.000000 seconds 00:20:17.136 00:20:17.136 Latency(us) 00:20:17.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.136 =================================================================================================================== 00:20:17.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80103' 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # kill 80103 00:20:17.136 10:09:06 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # wait 80103 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:21.321 Remove shared memory files 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:21.321 ************************************ 00:20:21.321 END TEST ftl_bdevperf 00:20:21.321 ************************************ 00:20:21.321 00:20:21.321 real 0m24.953s 00:20:21.321 user 0m28.854s 00:20:21.321 sys 0m1.151s 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:21.321 10:09:09 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.321 10:09:10 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:21.321 10:09:10 ftl -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:20:21.321 10:09:10 ftl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:21.321 10:09:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:21.321 ************************************ 00:20:21.321 START TEST ftl_trim 00:20:21.321 ************************************ 00:20:21.321 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:21.321 * Looking for test storage... 00:20:21.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80461 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:21.321 10:09:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80461 00:20:21.321 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@830 -- # '[' -z 80461 ']' 00:20:21.321 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.321 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:21.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.321 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.321 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:21.321 10:09:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:21.321 [2024-06-10 10:09:10.272448] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:20:21.321 [2024-06-10 10:09:10.272626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80461 ] 00:20:21.321 [2024-06-10 10:09:10.448230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:21.321 [2024-06-10 10:09:10.683289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.321 [2024-06-10 10:09:10.683374] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.321 [2024-06-10 10:09:10.683381] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.258 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:22.258 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@863 -- # return 0 00:20:22.258 10:09:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:22.258 10:09:11 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:22.259 10:09:11 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:22.259 10:09:11 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:22.259 10:09:11 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:22.259 10:09:11 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:22.259 10:09:11 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:22.259 10:09:11 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:22.259 10:09:11 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:22.259 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_name=nvme0n1 00:20:22.259 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_info 00:20:22.259 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bs 00:20:22.259 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local nb 00:20:22.259 10:09:11 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:22.517 10:09:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:20:22.517 { 00:20:22.517 "name": "nvme0n1", 00:20:22.517 "aliases": [ 00:20:22.517 "408180c9-d784-4c0b-a295-5d8becae74a2" 00:20:22.517 ], 00:20:22.517 "product_name": "NVMe disk", 00:20:22.517 "block_size": 4096, 00:20:22.517 "num_blocks": 1310720, 00:20:22.517 "uuid": "408180c9-d784-4c0b-a295-5d8becae74a2", 00:20:22.517 "assigned_rate_limits": { 00:20:22.517 "rw_ios_per_sec": 0, 00:20:22.517 "rw_mbytes_per_sec": 0, 00:20:22.517 "r_mbytes_per_sec": 0, 00:20:22.517 "w_mbytes_per_sec": 0 00:20:22.517 }, 00:20:22.517 "claimed": true, 00:20:22.517 "claim_type": "read_many_write_one", 00:20:22.517 "zoned": false, 00:20:22.517 "supported_io_types": { 00:20:22.517 "read": true, 00:20:22.517 "write": true, 00:20:22.517 "unmap": true, 00:20:22.517 "write_zeroes": true, 00:20:22.517 "flush": true, 00:20:22.517 "reset": true, 00:20:22.517 "compare": true, 00:20:22.517 "compare_and_write": false, 00:20:22.517 "abort": true, 00:20:22.517 "nvme_admin": true, 00:20:22.517 "nvme_io": true 00:20:22.517 }, 00:20:22.517 "driver_specific": { 00:20:22.517 "nvme": [ 00:20:22.517 { 00:20:22.517 "pci_address": "0000:00:11.0", 00:20:22.518 "trid": { 00:20:22.518 "trtype": "PCIe", 00:20:22.518 "traddr": "0000:00:11.0" 00:20:22.518 }, 00:20:22.518 "ctrlr_data": { 
00:20:22.518 "cntlid": 0, 00:20:22.518 "vendor_id": "0x1b36", 00:20:22.518 "model_number": "QEMU NVMe Ctrl", 00:20:22.518 "serial_number": "12341", 00:20:22.518 "firmware_revision": "8.0.0", 00:20:22.518 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:22.518 "oacs": { 00:20:22.518 "security": 0, 00:20:22.518 "format": 1, 00:20:22.518 "firmware": 0, 00:20:22.518 "ns_manage": 1 00:20:22.518 }, 00:20:22.518 "multi_ctrlr": false, 00:20:22.518 "ana_reporting": false 00:20:22.518 }, 00:20:22.518 "vs": { 00:20:22.518 "nvme_version": "1.4" 00:20:22.518 }, 00:20:22.518 "ns_data": { 00:20:22.518 "id": 1, 00:20:22.518 "can_share": false 00:20:22.518 } 00:20:22.518 } 00:20:22.518 ], 00:20:22.518 "mp_policy": "active_passive" 00:20:22.518 } 00:20:22.518 } 00:20:22.518 ]' 00:20:22.518 10:09:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:20:22.782 10:09:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bs=4096 00:20:22.782 10:09:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:20:22.782 10:09:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # nb=1310720 00:20:22.782 10:09:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_size=5120 00:20:22.782 10:09:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # echo 5120 00:20:22.782 10:09:12 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:22.782 10:09:12 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:22.782 10:09:12 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:22.782 10:09:12 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:22.782 10:09:12 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:23.041 10:09:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=8ebdbdf1-4241-43f9-89ed-a8ef7f282648 00:20:23.041 10:09:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:23.041 10:09:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ebdbdf1-4241-43f9-89ed-a8ef7f282648 00:20:23.299 10:09:12 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:23.556 10:09:13 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=fbc15256-e89f-4972-b87d-64f005cc9399 00:20:23.556 10:09:13 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fbc15256-e89f-4972-b87d-64f005cc9399 00:20:23.815 10:09:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7c8dada8-68f4-4539-827c-41c723847362 00:20:23.815 10:09:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7c8dada8-68f4-4539-827c-41c723847362 00:20:23.815 10:09:13 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:23.815 10:09:13 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:23.815 10:09:13 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7c8dada8-68f4-4539-827c-41c723847362 00:20:23.815 10:09:13 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:23.815 10:09:13 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7c8dada8-68f4-4539-827c-41c723847362 00:20:23.815 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_name=7c8dada8-68f4-4539-827c-41c723847362 00:20:23.815 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_info 00:20:23.815 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bs 00:20:23.815 10:09:13 ftl.ftl_trim -- 
common/autotest_common.sh@1380 -- # local nb 00:20:23.815 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c8dada8-68f4-4539-827c-41c723847362 00:20:24.072 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:20:24.072 { 00:20:24.072 "name": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:24.072 "aliases": [ 00:20:24.072 "lvs/nvme0n1p0" 00:20:24.072 ], 00:20:24.072 "product_name": "Logical Volume", 00:20:24.072 "block_size": 4096, 00:20:24.072 "num_blocks": 26476544, 00:20:24.072 "uuid": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:24.072 "assigned_rate_limits": { 00:20:24.072 "rw_ios_per_sec": 0, 00:20:24.072 "rw_mbytes_per_sec": 0, 00:20:24.072 "r_mbytes_per_sec": 0, 00:20:24.072 "w_mbytes_per_sec": 0 00:20:24.072 }, 00:20:24.072 "claimed": false, 00:20:24.072 "zoned": false, 00:20:24.072 "supported_io_types": { 00:20:24.072 "read": true, 00:20:24.072 "write": true, 00:20:24.072 "unmap": true, 00:20:24.072 "write_zeroes": true, 00:20:24.072 "flush": false, 00:20:24.072 "reset": true, 00:20:24.072 "compare": false, 00:20:24.072 "compare_and_write": false, 00:20:24.072 "abort": false, 00:20:24.072 "nvme_admin": false, 00:20:24.072 "nvme_io": false 00:20:24.072 }, 00:20:24.072 "driver_specific": { 00:20:24.072 "lvol": { 00:20:24.072 "lvol_store_uuid": "fbc15256-e89f-4972-b87d-64f005cc9399", 00:20:24.072 "base_bdev": "nvme0n1", 00:20:24.072 "thin_provision": true, 00:20:24.072 "num_allocated_clusters": 0, 00:20:24.072 "snapshot": false, 00:20:24.072 "clone": false, 00:20:24.072 "esnap_clone": false 00:20:24.072 } 00:20:24.072 } 00:20:24.072 } 00:20:24.072 ]' 00:20:24.072 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:20:24.072 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bs=4096 00:20:24.072 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:20:24.330 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # nb=26476544 00:20:24.330 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:20:24.330 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # echo 103424 00:20:24.330 10:09:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:24.330 10:09:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:24.330 10:09:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:24.588 10:09:13 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:24.588 10:09:13 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:24.588 10:09:13 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7c8dada8-68f4-4539-827c-41c723847362 00:20:24.588 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_name=7c8dada8-68f4-4539-827c-41c723847362 00:20:24.588 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_info 00:20:24.588 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bs 00:20:24.588 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local nb 00:20:24.588 10:09:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c8dada8-68f4-4539-827c-41c723847362 00:20:24.846 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:20:24.846 { 00:20:24.846 "name": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:24.846 "aliases": [ 00:20:24.846 
"lvs/nvme0n1p0" 00:20:24.846 ], 00:20:24.846 "product_name": "Logical Volume", 00:20:24.846 "block_size": 4096, 00:20:24.846 "num_blocks": 26476544, 00:20:24.846 "uuid": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:24.846 "assigned_rate_limits": { 00:20:24.846 "rw_ios_per_sec": 0, 00:20:24.846 "rw_mbytes_per_sec": 0, 00:20:24.846 "r_mbytes_per_sec": 0, 00:20:24.846 "w_mbytes_per_sec": 0 00:20:24.846 }, 00:20:24.847 "claimed": false, 00:20:24.847 "zoned": false, 00:20:24.847 "supported_io_types": { 00:20:24.847 "read": true, 00:20:24.847 "write": true, 00:20:24.847 "unmap": true, 00:20:24.847 "write_zeroes": true, 00:20:24.847 "flush": false, 00:20:24.847 "reset": true, 00:20:24.847 "compare": false, 00:20:24.847 "compare_and_write": false, 00:20:24.847 "abort": false, 00:20:24.847 "nvme_admin": false, 00:20:24.847 "nvme_io": false 00:20:24.847 }, 00:20:24.847 "driver_specific": { 00:20:24.847 "lvol": { 00:20:24.847 "lvol_store_uuid": "fbc15256-e89f-4972-b87d-64f005cc9399", 00:20:24.847 "base_bdev": "nvme0n1", 00:20:24.847 "thin_provision": true, 00:20:24.847 "num_allocated_clusters": 0, 00:20:24.847 "snapshot": false, 00:20:24.847 "clone": false, 00:20:24.847 "esnap_clone": false 00:20:24.847 } 00:20:24.847 } 00:20:24.847 } 00:20:24.847 ]' 00:20:24.847 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:20:24.847 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bs=4096 00:20:24.847 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:20:24.847 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # nb=26476544 00:20:24.847 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:20:24.847 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # echo 103424 00:20:24.847 10:09:14 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:24.847 10:09:14 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:25.105 10:09:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:25.105 10:09:14 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:25.105 10:09:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7c8dada8-68f4-4539-827c-41c723847362 00:20:25.105 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local bdev_name=7c8dada8-68f4-4539-827c-41c723847362 00:20:25.105 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_info 00:20:25.105 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bs 00:20:25.105 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local nb 00:20:25.105 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c8dada8-68f4-4539-827c-41c723847362 00:20:25.363 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:20:25.363 { 00:20:25.363 "name": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:25.363 "aliases": [ 00:20:25.363 "lvs/nvme0n1p0" 00:20:25.363 ], 00:20:25.363 "product_name": "Logical Volume", 00:20:25.363 "block_size": 4096, 00:20:25.363 "num_blocks": 26476544, 00:20:25.363 "uuid": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:25.363 "assigned_rate_limits": { 00:20:25.363 "rw_ios_per_sec": 0, 00:20:25.363 "rw_mbytes_per_sec": 0, 00:20:25.363 "r_mbytes_per_sec": 0, 00:20:25.363 "w_mbytes_per_sec": 0 00:20:25.363 }, 00:20:25.363 "claimed": false, 00:20:25.363 "zoned": false, 00:20:25.363 "supported_io_types": { 00:20:25.363 "read": 
true, 00:20:25.363 "write": true, 00:20:25.363 "unmap": true, 00:20:25.363 "write_zeroes": true, 00:20:25.363 "flush": false, 00:20:25.363 "reset": true, 00:20:25.363 "compare": false, 00:20:25.363 "compare_and_write": false, 00:20:25.363 "abort": false, 00:20:25.363 "nvme_admin": false, 00:20:25.363 "nvme_io": false 00:20:25.363 }, 00:20:25.363 "driver_specific": { 00:20:25.363 "lvol": { 00:20:25.363 "lvol_store_uuid": "fbc15256-e89f-4972-b87d-64f005cc9399", 00:20:25.363 "base_bdev": "nvme0n1", 00:20:25.363 "thin_provision": true, 00:20:25.363 "num_allocated_clusters": 0, 00:20:25.363 "snapshot": false, 00:20:25.363 "clone": false, 00:20:25.363 "esnap_clone": false 00:20:25.363 } 00:20:25.363 } 00:20:25.363 } 00:20:25.363 ]' 00:20:25.363 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:20:25.363 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bs=4096 00:20:25.363 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:20:25.621 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # nb=26476544 00:20:25.621 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:20:25.621 10:09:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # echo 103424 00:20:25.621 10:09:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:25.621 10:09:14 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7c8dada8-68f4-4539-827c-41c723847362 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:25.621 [2024-06-10 10:09:15.127711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.621 [2024-06-10 10:09:15.127766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:25.621 [2024-06-10 10:09:15.127788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:25.621 [2024-06-10 10:09:15.127801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.621 [2024-06-10 10:09:15.131218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.621 [2024-06-10 10:09:15.131261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.621 [2024-06-10 10:09:15.131281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:20:25.621 [2024-06-10 10:09:15.131295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.621 [2024-06-10 10:09:15.131471] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.621 [2024-06-10 10:09:15.132444] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.621 [2024-06-10 10:09:15.132486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.621 [2024-06-10 10:09:15.132502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.621 [2024-06-10 10:09:15.132520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:20:25.621 [2024-06-10 10:09:15.132532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.621 [2024-06-10 10:09:15.132777] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4e93524f-9e0d-42fe-9154-f58916c65969 00:20:25.621 [2024-06-10 10:09:15.133842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.621 [2024-06-10 10:09:15.133882] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:25.621 [2024-06-10 10:09:15.133899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:25.621 [2024-06-10 10:09:15.133914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.882 [2024-06-10 10:09:15.138828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.882 [2024-06-10 10:09:15.139027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.882 [2024-06-10 10:09:15.139171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:20:25.882 [2024-06-10 10:09:15.139236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.882 [2024-06-10 10:09:15.139583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.882 [2024-06-10 10:09:15.139765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.882 [2024-06-10 10:09:15.139898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:25.882 [2024-06-10 10:09:15.139927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.882 [2024-06-10 10:09:15.139987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.882 [2024-06-10 10:09:15.140010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.882 [2024-06-10 10:09:15.140023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:25.882 [2024-06-10 10:09:15.140040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.882 [2024-06-10 10:09:15.140084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:25.882 [2024-06-10 10:09:15.144804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.882 [2024-06-10 10:09:15.144967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.882 [2024-06-10 10:09:15.145099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.726 ms 00:20:25.882 [2024-06-10 10:09:15.145159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.882 [2024-06-10 10:09:15.145382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.882 [2024-06-10 10:09:15.145520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.882 [2024-06-10 10:09:15.145651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:25.882 [2024-06-10 10:09:15.145794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.882 [2024-06-10 10:09:15.145894] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:25.882 [2024-06-10 10:09:15.146217] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.882 [2024-06-10 10:09:15.146380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.882 [2024-06-10 10:09:15.146521] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:25.883 [2024-06-10 10:09:15.146671] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.883 [2024-06-10 10:09:15.146815] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:20:25.883 [2024-06-10 10:09:15.146952] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:25.883 [2024-06-10 10:09:15.147009] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.883 [2024-06-10 10:09:15.147169] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.883 [2024-06-10 10:09:15.147234] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.883 [2024-06-10 10:09:15.147343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.883 [2024-06-10 10:09:15.147464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.883 [2024-06-10 10:09:15.147528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:20:25.883 [2024-06-10 10:09:15.147627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.883 [2024-06-10 10:09:15.147800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.883 [2024-06-10 10:09:15.147913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.883 [2024-06-10 10:09:15.147965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:25.883 [2024-06-10 10:09:15.148011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.883 [2024-06-10 10:09:15.148184] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.883 [2024-06-10 10:09:15.148245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.883 [2024-06-10 10:09:15.148302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.883 [2024-06-10 10:09:15.148383] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.883 [2024-06-10 10:09:15.148508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:25.883 [2024-06-10 10:09:15.148567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.883 [2024-06-10 10:09:15.148703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:25.883 [2024-06-10 10:09:15.148763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.883 [2024-06-10 10:09:15.148870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:25.883 [2024-06-10 10:09:15.148928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.883 [2024-06-10 10:09:15.149035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.883 [2024-06-10 10:09:15.149169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:25.883 [2024-06-10 10:09:15.149288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.883 [2024-06-10 10:09:15.149401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.883 [2024-06-10 10:09:15.149464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:25.883 [2024-06-10 10:09:15.149576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:25.883 [2024-06-10 10:09:15.149617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:25.883 [2024-06-10 10:09:15.149632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:20:25.883 [2024-06-10 10:09:15.149657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.883 [2024-06-10 10:09:15.149672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.883 [2024-06-10 10:09:15.149696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.883 [2024-06-10 10:09:15.149707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.883 [2024-06-10 10:09:15.149730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.883 [2024-06-10 10:09:15.149743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.883 [2024-06-10 10:09:15.149766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.883 [2024-06-10 10:09:15.149778] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.883 [2024-06-10 10:09:15.149801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.883 [2024-06-10 10:09:15.149814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.883 [2024-06-10 10:09:15.149840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.883 [2024-06-10 10:09:15.149851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:25.883 [2024-06-10 10:09:15.149864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.883 [2024-06-10 10:09:15.149885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.883 [2024-06-10 10:09:15.149900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:25.883 [2024-06-10 10:09:15.149911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.883 [2024-06-10 10:09:15.149935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:25.883 [2024-06-10 10:09:15.149947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.883 [2024-06-10 10:09:15.149957] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.883 [2024-06-10 10:09:15.149972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.883 [2024-06-10 10:09:15.149984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.883 [2024-06-10 10:09:15.149997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.883 [2024-06-10 10:09:15.150009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:25.883 [2024-06-10 10:09:15.150022] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.883 [2024-06-10 10:09:15.150033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.883 [2024-06-10 10:09:15.150048] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.883 [2024-06-10 10:09:15.150058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.883 [2024-06-10 10:09:15.150072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.883 [2024-06-10 10:09:15.150089] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.883 [2024-06-10 10:09:15.150106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.883 [2024-06-10 10:09:15.150122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:25.883 [2024-06-10 10:09:15.150136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:25.883 [2024-06-10 10:09:15.150148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:25.883 [2024-06-10 10:09:15.150162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:25.883 [2024-06-10 10:09:15.150173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:25.883 [2024-06-10 10:09:15.150187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:25.883 [2024-06-10 10:09:15.150199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:25.883 [2024-06-10 10:09:15.150214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:25.883 [2024-06-10 10:09:15.150226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:25.883 [2024-06-10 10:09:15.150239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:25.883 [2024-06-10 10:09:15.150252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:25.883 [2024-06-10 10:09:15.150267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:25.883 [2024-06-10 10:09:15.150279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:25.883 [2024-06-10 10:09:15.150293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:25.883 [2024-06-10 10:09:15.150305] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.883 [2024-06-10 10:09:15.150320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.883 [2024-06-10 10:09:15.150333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.883 [2024-06-10 
10:09:15.150346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.883 [2024-06-10 10:09:15.150358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.883 [2024-06-10 10:09:15.150372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.883 [2024-06-10 10:09:15.150386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.883 [2024-06-10 10:09:15.150400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.883 [2024-06-10 10:09:15.150413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:20:25.883 [2024-06-10 10:09:15.150427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.883 [2024-06-10 10:09:15.150553] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:25.883 [2024-06-10 10:09:15.150578] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:27.779 [2024-06-10 10:09:17.226306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.779 [2024-06-10 10:09:17.226380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:27.779 [2024-06-10 10:09:17.226402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2075.766 ms 00:20:27.779 [2024-06-10 10:09:17.226417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.779 [2024-06-10 10:09:17.258190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.779 [2024-06-10 10:09:17.258254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.779 [2024-06-10 10:09:17.258274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.437 ms 00:20:27.779 [2024-06-10 10:09:17.258292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.779 [2024-06-10 10:09:17.258470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.779 [2024-06-10 10:09:17.258498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:27.779 [2024-06-10 10:09:17.258514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:27.779 [2024-06-10 10:09:17.258527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.308951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.309031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.038 [2024-06-10 10:09:17.309053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.381 ms 00:20:28.038 [2024-06-10 10:09:17.309067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.309196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.309220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.038 [2024-06-10 10:09:17.309235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:28.038 [2024-06-10 10:09:17.309250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 
10:09:17.309573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.309604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.038 [2024-06-10 10:09:17.309619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:28.038 [2024-06-10 10:09:17.309632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.309816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.309840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.038 [2024-06-10 10:09:17.309855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:28.038 [2024-06-10 10:09:17.309868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.329766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.329821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.038 [2024-06-10 10:09:17.329839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:20:28.038 [2024-06-10 10:09:17.329853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.343579] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:28.038 [2024-06-10 10:09:17.357683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.357955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:28.038 [2024-06-10 10:09:17.358089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.648 ms 00:20:28.038 [2024-06-10 10:09:17.358150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.424279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.424533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:28.038 [2024-06-10 10:09:17.424691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.865 ms 00:20:28.038 [2024-06-10 10:09:17.424717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.425007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.425036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:28.038 [2024-06-10 10:09:17.425054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:20:28.038 [2024-06-10 10:09:17.425066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.457463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.457677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:28.038 [2024-06-10 10:09:17.457820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.348 ms 00:20:28.038 [2024-06-10 10:09:17.457876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.489583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.489791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:28.038 [2024-06-10 
10:09:17.489948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.449 ms 00:20:28.038 [2024-06-10 10:09:17.490102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.038 [2024-06-10 10:09:17.490939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.038 [2024-06-10 10:09:17.491080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.038 [2024-06-10 10:09:17.491112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:20:28.038 [2024-06-10 10:09:17.491126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.296 [2024-06-10 10:09:17.579715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.296 [2024-06-10 10:09:17.579964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:28.296 [2024-06-10 10:09:17.580153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.528 ms 00:20:28.296 [2024-06-10 10:09:17.580214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.296 [2024-06-10 10:09:17.613390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.296 [2024-06-10 10:09:17.613690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:28.296 [2024-06-10 10:09:17.613850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.010 ms 00:20:28.296 [2024-06-10 10:09:17.613981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.296 [2024-06-10 10:09:17.646048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.296 [2024-06-10 10:09:17.646229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:28.296 [2024-06-10 10:09:17.646355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.856 ms 00:20:28.296 [2024-06-10 10:09:17.646491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.296 [2024-06-10 10:09:17.677970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.296 [2024-06-10 10:09:17.678153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:28.296 [2024-06-10 10:09:17.678284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.246 ms 00:20:28.296 [2024-06-10 10:09:17.678339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.296 [2024-06-10 10:09:17.678558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.296 [2024-06-10 10:09:17.678597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:28.296 [2024-06-10 10:09:17.678616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:28.297 [2024-06-10 10:09:17.678629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.297 [2024-06-10 10:09:17.678760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.297 [2024-06-10 10:09:17.678785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:28.297 [2024-06-10 10:09:17.678802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:28.297 [2024-06-10 10:09:17.678814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.297 [2024-06-10 10:09:17.680037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.297 [2024-06-10 10:09:17.684434] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2551.801 ms, result 0 00:20:28.297 [2024-06-10 10:09:17.685406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.297 { 00:20:28.297 "name": "ftl0", 00:20:28.297 "uuid": "4e93524f-9e0d-42fe-9154-f58916c65969" 00:20:28.297 } 00:20:28.297 10:09:17 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:28.297 10:09:17 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_name=ftl0 00:20:28.297 10:09:17 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:28.297 10:09:17 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local i 00:20:28.297 10:09:17 ftl.ftl_trim -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:28.297 10:09:17 ftl.ftl_trim -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:28.297 10:09:17 ftl.ftl_trim -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:28.555 10:09:17 ftl.ftl_trim -- common/autotest_common.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:28.814 [ 00:20:28.814 { 00:20:28.814 "name": "ftl0", 00:20:28.814 "aliases": [ 00:20:28.814 "4e93524f-9e0d-42fe-9154-f58916c65969" 00:20:28.814 ], 00:20:28.814 "product_name": "FTL disk", 00:20:28.814 "block_size": 4096, 00:20:28.814 "num_blocks": 23592960, 00:20:28.814 "uuid": "4e93524f-9e0d-42fe-9154-f58916c65969", 00:20:28.814 "assigned_rate_limits": { 00:20:28.814 "rw_ios_per_sec": 0, 00:20:28.814 "rw_mbytes_per_sec": 0, 00:20:28.814 "r_mbytes_per_sec": 0, 00:20:28.814 "w_mbytes_per_sec": 0 00:20:28.814 }, 00:20:28.814 "claimed": false, 00:20:28.814 "zoned": false, 00:20:28.814 "supported_io_types": { 00:20:28.814 "read": true, 00:20:28.814 "write": true, 00:20:28.814 "unmap": true, 00:20:28.814 "write_zeroes": true, 00:20:28.814 "flush": true, 00:20:28.814 "reset": false, 00:20:28.814 "compare": false, 00:20:28.814 "compare_and_write": false, 00:20:28.814 "abort": false, 00:20:28.814 "nvme_admin": false, 00:20:28.814 "nvme_io": false 00:20:28.814 }, 00:20:28.814 "driver_specific": { 00:20:28.814 "ftl": { 00:20:28.814 "base_bdev": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:28.814 "cache": "nvc0n1p0" 00:20:28.814 } 00:20:28.814 } 00:20:28.814 } 00:20:28.814 ] 00:20:28.814 10:09:18 ftl.ftl_trim -- common/autotest_common.sh@906 -- # return 0 00:20:28.814 10:09:18 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:28.814 10:09:18 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:29.072 10:09:18 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:29.072 10:09:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:29.330 10:09:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:29.330 { 00:20:29.330 "name": "ftl0", 00:20:29.330 "aliases": [ 00:20:29.330 "4e93524f-9e0d-42fe-9154-f58916c65969" 00:20:29.330 ], 00:20:29.330 "product_name": "FTL disk", 00:20:29.330 "block_size": 4096, 00:20:29.330 "num_blocks": 23592960, 00:20:29.330 "uuid": "4e93524f-9e0d-42fe-9154-f58916c65969", 00:20:29.330 "assigned_rate_limits": { 00:20:29.330 "rw_ios_per_sec": 0, 00:20:29.330 "rw_mbytes_per_sec": 0, 00:20:29.330 "r_mbytes_per_sec": 0, 00:20:29.330 "w_mbytes_per_sec": 0 00:20:29.330 }, 00:20:29.330 "claimed": false, 00:20:29.330 "zoned": false, 00:20:29.330 
"supported_io_types": { 00:20:29.330 "read": true, 00:20:29.330 "write": true, 00:20:29.330 "unmap": true, 00:20:29.330 "write_zeroes": true, 00:20:29.330 "flush": true, 00:20:29.330 "reset": false, 00:20:29.330 "compare": false, 00:20:29.330 "compare_and_write": false, 00:20:29.330 "abort": false, 00:20:29.330 "nvme_admin": false, 00:20:29.330 "nvme_io": false 00:20:29.330 }, 00:20:29.330 "driver_specific": { 00:20:29.330 "ftl": { 00:20:29.330 "base_bdev": "7c8dada8-68f4-4539-827c-41c723847362", 00:20:29.330 "cache": "nvc0n1p0" 00:20:29.330 } 00:20:29.330 } 00:20:29.330 } 00:20:29.330 ]' 00:20:29.330 10:09:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:29.330 10:09:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:29.330 10:09:18 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:29.589 [2024-06-10 10:09:19.025999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.026066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.589 [2024-06-10 10:09:19.026087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.589 [2024-06-10 10:09:19.026115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.589 [2024-06-10 10:09:19.026160] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:29.589 [2024-06-10 10:09:19.029494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.029526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.589 [2024-06-10 10:09:19.029544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.307 ms 00:20:29.589 [2024-06-10 10:09:19.029556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.589 [2024-06-10 10:09:19.030195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.030226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.589 [2024-06-10 10:09:19.030245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:20:29.589 [2024-06-10 10:09:19.030257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.589 [2024-06-10 10:09:19.034123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.034267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.589 [2024-06-10 10:09:19.034398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.819 ms 00:20:29.589 [2024-06-10 10:09:19.034457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.589 [2024-06-10 10:09:19.042153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.042295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:29.589 [2024-06-10 10:09:19.042424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.542 ms 00:20:29.589 [2024-06-10 10:09:19.042553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.589 [2024-06-10 10:09:19.074116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.074335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.589 [2024-06-10 10:09:19.074471] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.381 ms 00:20:29.589 [2024-06-10 10:09:19.074531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.589 [2024-06-10 10:09:19.093483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.093685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.589 [2024-06-10 10:09:19.093832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.652 ms 00:20:29.589 [2024-06-10 10:09:19.093895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.589 [2024-06-10 10:09:19.094312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.589 [2024-06-10 10:09:19.094455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.589 [2024-06-10 10:09:19.094575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:20:29.589 [2024-06-10 10:09:19.094705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.848 [2024-06-10 10:09:19.126172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.848 [2024-06-10 10:09:19.126358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:29.848 [2024-06-10 10:09:19.126499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.376 ms 00:20:29.848 [2024-06-10 10:09:19.126557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.848 [2024-06-10 10:09:19.157902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.848 [2024-06-10 10:09:19.158151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:29.848 [2024-06-10 10:09:19.158280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.005 ms 00:20:29.848 [2024-06-10 10:09:19.158333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.848 [2024-06-10 10:09:19.189382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.848 [2024-06-10 10:09:19.189428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.848 [2024-06-10 10:09:19.189450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.802 ms 00:20:29.848 [2024-06-10 10:09:19.189462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.848 [2024-06-10 10:09:19.220176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.848 [2024-06-10 10:09:19.220230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:29.848 [2024-06-10 10:09:19.220252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.561 ms 00:20:29.848 [2024-06-10 10:09:19.220265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.848 [2024-06-10 10:09:19.220364] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.848 [2024-06-10 10:09:19.220390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.848 [2024-06-10 10:09:19.220547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.220998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221200] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 
10:09:19.221536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.849 [2024-06-10 10:09:19.221851] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.849 [2024-06-10 10:09:19.221865] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e93524f-9e0d-42fe-9154-f58916c65969 00:20:29.849 [2024-06-10 10:09:19.221878] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.849 [2024-06-10 10:09:19.221893] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.849 [2024-06-10 10:09:19.221909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.849 [2024-06-10 10:09:19.221925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.849 [2024-06-10 10:09:19.221936] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.849 [2024-06-10 10:09:19.221949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.849 [2024-06-10 10:09:19.221963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.849 [2024-06-10 10:09:19.221976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.849 [2024-06-10 10:09:19.221986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:29.849 [2024-06-10 10:09:19.221999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.849 [2024-06-10 10:09:19.222011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.849 [2024-06-10 10:09:19.222025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:20:29.849 [2024-06-10 10:09:19.222037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.849 [2024-06-10 10:09:19.238825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.849 [2024-06-10 10:09:19.238866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.849 [2024-06-10 10:09:19.238885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.745 ms 00:20:29.849 [2024-06-10 10:09:19.238898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.849 [2024-06-10 10:09:19.239417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.849 [2024-06-10 10:09:19.239442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.849 [2024-06-10 10:09:19.239459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:20:29.849 [2024-06-10 10:09:19.239471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.849 [2024-06-10 10:09:19.297458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.849 [2024-06-10 10:09:19.297524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.849 [2024-06-10 10:09:19.297546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.849 [2024-06-10 10:09:19.297559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.849 [2024-06-10 10:09:19.297733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.849 [2024-06-10 10:09:19.297754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.849 [2024-06-10 10:09:19.297770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.849 [2024-06-10 10:09:19.297782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.849 [2024-06-10 10:09:19.297870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.849 [2024-06-10 10:09:19.297889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.849 [2024-06-10 10:09:19.297907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.849 [2024-06-10 10:09:19.297918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
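The 'Dump statistics' and 'Rollback' entries above belong to the 'FTL shutdown' management process started by the bdev_ftl_unload call at trim.sh@61. Pulled together from the commands already visible in this trace, the RPC side of the sequence amounts to roughly the following (a sketch, not the script verbatim; paths shortened):

  scripts/rpc.py bdev_wait_for_examine                                  # waitforbdev ftl0, per autotest_common.sh@903
  scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000                         # returns the FTL disk JSON shown above
  nb=$(scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')    # 23592960 in this run
  scripts/rpc.py bdev_ftl_unload -b ftl0                                # triggers the 'FTL shutdown' trace above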
00:20:29.849 [2024-06-10 10:09:19.297961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.849 [2024-06-10 10:09:19.297976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.849 [2024-06-10 10:09:19.297991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.849 [2024-06-10 10:09:19.298002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.403402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.403474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.108 [2024-06-10 10:09:19.403497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 10:09:19.403509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.487568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.487630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.108 [2024-06-10 10:09:19.487677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 10:09:19.487692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.487812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.487831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.108 [2024-06-10 10:09:19.487848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 10:09:19.487863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.487925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.487940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.108 [2024-06-10 10:09:19.487954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 10:09:19.487966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.488110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.488135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.108 [2024-06-10 10:09:19.488151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 10:09:19.488163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.488261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.488283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:30.108 [2024-06-10 10:09:19.488300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 10:09:19.488312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.488373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.488389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.108 [2024-06-10 10:09:19.488403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 
10:09:19.488415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.488494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.108 [2024-06-10 10:09:19.488512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.108 [2024-06-10 10:09:19.488526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.108 [2024-06-10 10:09:19.488537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.108 [2024-06-10 10:09:19.488768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 462.756 ms, result 0 00:20:30.108 true 00:20:30.108 10:09:19 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80461 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@949 -- # '[' -z 80461 ']' 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@953 -- # kill -0 80461 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # uname 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80461 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:30.108 killing process with pid 80461 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80461' 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@968 -- # kill 80461 00:20:30.108 10:09:19 ftl.ftl_trim -- common/autotest_common.sh@973 -- # wait 80461 00:20:34.314 10:09:23 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:35.685 65536+0 records in 00:20:35.685 65536+0 records out 00:20:35.685 268435456 bytes (268 MB, 256 MiB) copied, 1.14705 s, 234 MB/s 00:20:35.685 10:09:24 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.685 [2024-06-10 10:09:25.073219] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
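After the unload, trim.sh@66 and trim.sh@69 above generate a 256 MiB random pattern and replay it into ftl0 through the stand-alone spdk_dd app. Roughly (the of= target of dd is inferred from the later --if path, since the xtrace line does not show the redirection):

  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json      # recreates the bdev stack from the saved config, hence the second 'FTL startup' below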
00:20:35.685 [2024-06-10 10:09:25.073392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80658 ] 00:20:35.943 [2024-06-10 10:09:25.247084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.202 [2024-06-10 10:09:25.478045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.461 [2024-06-10 10:09:25.783571] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.461 [2024-06-10 10:09:25.783702] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.461 [2024-06-10 10:09:25.940364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.940421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.461 [2024-06-10 10:09:25.940456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.461 [2024-06-10 10:09:25.940467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.944008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.944073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.461 [2024-06-10 10:09:25.944091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.513 ms 00:20:36.461 [2024-06-10 10:09:25.944103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.944343] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:36.461 [2024-06-10 10:09:25.945460] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.461 [2024-06-10 10:09:25.945501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.945515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.461 [2024-06-10 10:09:25.945528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:20:36.461 [2024-06-10 10:09:25.945539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.946827] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.461 [2024-06-10 10:09:25.964637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.964789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.461 [2024-06-10 10:09:25.964812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.810 ms 00:20:36.461 [2024-06-10 10:09:25.964833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.965055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.965080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.461 [2024-06-10 10:09:25.965094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:36.461 [2024-06-10 10:09:25.965105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.970234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 
10:09:25.970285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.461 [2024-06-10 10:09:25.970346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.053 ms 00:20:36.461 [2024-06-10 10:09:25.970359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.970548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.970585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.461 [2024-06-10 10:09:25.970599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:36.461 [2024-06-10 10:09:25.970610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.970698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.970717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.461 [2024-06-10 10:09:25.970729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:36.461 [2024-06-10 10:09:25.970744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.970784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:36.461 [2024-06-10 10:09:25.975342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.975385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.461 [2024-06-10 10:09:25.975402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.568 ms 00:20:36.461 [2024-06-10 10:09:25.975413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.461 [2024-06-10 10:09:25.975532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.461 [2024-06-10 10:09:25.975551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.461 [2024-06-10 10:09:25.975564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:36.462 [2024-06-10 10:09:25.975575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.462 [2024-06-10 10:09:25.975638] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.462 [2024-06-10 10:09:25.975678] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.462 [2024-06-10 10:09:25.975747] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.462 [2024-06-10 10:09:25.975773] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:36.462 [2024-06-10 10:09:25.975880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.462 [2024-06-10 10:09:25.975895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.462 [2024-06-10 10:09:25.975910] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:36.462 [2024-06-10 10:09:25.975926] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.462 [2024-06-10 10:09:25.975947] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.462 [2024-06-10 10:09:25.975963] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:36.462 [2024-06-10 10:09:25.975982] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.462 [2024-06-10 10:09:25.976007] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.462 [2024-06-10 10:09:25.976019] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.462 [2024-06-10 10:09:25.976031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.462 [2024-06-10 10:09:25.976042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.462 [2024-06-10 10:09:25.976054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:36.462 [2024-06-10 10:09:25.976069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.462 [2024-06-10 10:09:25.976186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.462 [2024-06-10 10:09:25.976208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.462 [2024-06-10 10:09:25.976221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:36.462 [2024-06-10 10:09:25.976232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.462 [2024-06-10 10:09:25.976362] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.462 [2024-06-10 10:09:25.976384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.462 [2024-06-10 10:09:25.976396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.462 [2024-06-10 10:09:25.976408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.462 [2024-06-10 10:09:25.976431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:36.462 [2024-06-10 10:09:25.976455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:36.462 [2024-06-10 10:09:25.976470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.462 [2024-06-10 10:09:25.976493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.462 [2024-06-10 10:09:25.976511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:36.462 [2024-06-10 10:09:25.976525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.462 [2024-06-10 10:09:25.976544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.462 [2024-06-10 10:09:25.976559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:36.462 [2024-06-10 10:09:25.976570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.462 [2024-06-10 10:09:25.976591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:36.462 [2024-06-10 10:09:25.976600] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.462 [2024-06-10 10:09:25.976699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.462 [2024-06-10 10:09:25.976742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.462 [2024-06-10 10:09:25.976754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976764] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.462 [2024-06-10 10:09:25.976777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.462 [2024-06-10 10:09:25.976795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.462 [2024-06-10 10:09:25.976821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.462 [2024-06-10 10:09:25.976832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.462 [2024-06-10 10:09:25.976859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:36.462 [2024-06-10 10:09:25.976870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.462 [2024-06-10 10:09:25.976896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:36.462 [2024-06-10 10:09:25.976914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:36.462 [2024-06-10 10:09:25.976927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.462 [2024-06-10 10:09:25.976938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.462 [2024-06-10 10:09:25.976954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:36.462 [2024-06-10 10:09:25.976973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.462 [2024-06-10 10:09:25.976984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.462 [2024-06-10 10:09:25.976994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:36.462 [2024-06-10 10:09:25.977008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.462 [2024-06-10 10:09:25.977026] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.462 [2024-06-10 10:09:25.977041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.462 [2024-06-10 10:09:25.977052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.462 [2024-06-10 10:09:25.977063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.462 [2024-06-10 10:09:25.977074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:36.462 [2024-06-10 10:09:25.977091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.462 [2024-06-10 10:09:25.977102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.462 [2024-06-10 10:09:25.977113] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.462 [2024-06-10 10:09:25.977125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.462 [2024-06-10 10:09:25.977143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.462 [2024-06-10 10:09:25.977160] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.462 [2024-06-10 10:09:25.977176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.462 [2024-06-10 10:09:25.977200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:36.462 [2024-06-10 10:09:25.977212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:36.462 [2024-06-10 10:09:25.977223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:36.462 [2024-06-10 10:09:25.977234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:36.462 [2024-06-10 10:09:25.977246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:36.462 [2024-06-10 10:09:25.977265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:36.462 [2024-06-10 10:09:25.977284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:36.462 [2024-06-10 10:09:25.977301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:36.462 [2024-06-10 10:09:25.977321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:36.722 [2024-06-10 10:09:25.977339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:36.722 [2024-06-10 10:09:25.977351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:36.722 [2024-06-10 10:09:25.977362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:36.722 [2024-06-10 10:09:25.977373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:36.722 [2024-06-10 10:09:25.977385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:36.722 [2024-06-10 10:09:25.977401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.722 [2024-06-10 10:09:25.977414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.722 [2024-06-10 10:09:25.977435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:20:36.722 [2024-06-10 10:09:25.977451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.722 [2024-06-10 10:09:25.977464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.722 [2024-06-10 10:09:25.977479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.722 [2024-06-10 10:09:25.977492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:25.977503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.722 [2024-06-10 10:09:25.977517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:20:36.722 [2024-06-10 10:09:25.977536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.017835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.017905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.722 [2024-06-10 10:09:26.017942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.201 ms 00:20:36.722 [2024-06-10 10:09:26.017953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.018160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.018180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.722 [2024-06-10 10:09:26.018201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:36.722 [2024-06-10 10:09:26.018215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.054534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.054596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.722 [2024-06-10 10:09:26.054631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.284 ms 00:20:36.722 [2024-06-10 10:09:26.054642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.054829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.054855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.722 [2024-06-10 10:09:26.054869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:36.722 [2024-06-10 10:09:26.054879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.055285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.055312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.722 [2024-06-10 10:09:26.055325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:20:36.722 [2024-06-10 10:09:26.055337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.055509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.055530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.722 [2024-06-10 10:09:26.055547] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:20:36.722 [2024-06-10 10:09:26.055574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.071033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.071087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.722 [2024-06-10 10:09:26.071119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.416 ms 00:20:36.722 [2024-06-10 10:09:26.071130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.086291] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:36.722 [2024-06-10 10:09:26.086334] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:36.722 [2024-06-10 10:09:26.086368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.086379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:36.722 [2024-06-10 10:09:26.086391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.061 ms 00:20:36.722 [2024-06-10 10:09:26.086401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.114123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.114185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:36.722 [2024-06-10 10:09:26.114218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.636 ms 00:20:36.722 [2024-06-10 10:09:26.114230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.128886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.128942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:36.722 [2024-06-10 10:09:26.128973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.568 ms 00:20:36.722 [2024-06-10 10:09:26.128983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.143808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.143848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:36.722 [2024-06-10 10:09:26.143894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.739 ms 00:20:36.722 [2024-06-10 10:09:26.143921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.144849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.144885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.722 [2024-06-10 10:09:26.144917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:20:36.722 [2024-06-10 10:09:26.144934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.217701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.722 [2024-06-10 10:09:26.217797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:36.722 [2024-06-10 10:09:26.217834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.729 ms 
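The restore steps above ('Restore NV cache metadata', 'Restore band info metadata', 'Restore P2L checkpoints', ...) run inside spdk_dd, which rebuilds the bdev stack from the JSON captured before the unload. That capture is the trim.sh@54-56 sequence traced earlier; presumably it is written to the ftl.json path later passed to spdk_dd via --json, i.e. something like:

  { echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json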
00:20:36.722 [2024-06-10 10:09:26.217854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.722 [2024-06-10 10:09:26.230800] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:36.981 [2024-06-10 10:09:26.244997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.981 [2024-06-10 10:09:26.245067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.981 [2024-06-10 10:09:26.245087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.997 ms 00:20:36.981 [2024-06-10 10:09:26.245101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.981 [2024-06-10 10:09:26.245241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.981 [2024-06-10 10:09:26.245262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:36.981 [2024-06-10 10:09:26.245276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:36.981 [2024-06-10 10:09:26.245287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.981 [2024-06-10 10:09:26.245373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.981 [2024-06-10 10:09:26.245389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.981 [2024-06-10 10:09:26.245401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:36.981 [2024-06-10 10:09:26.245411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.981 [2024-06-10 10:09:26.245442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.981 [2024-06-10 10:09:26.245455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.981 [2024-06-10 10:09:26.245467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:36.981 [2024-06-10 10:09:26.245477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.981 [2024-06-10 10:09:26.245514] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:36.981 [2024-06-10 10:09:26.245531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.981 [2024-06-10 10:09:26.245542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:36.981 [2024-06-10 10:09:26.245553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:36.982 [2024-06-10 10:09:26.245563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.982 [2024-06-10 10:09:26.277184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.982 [2024-06-10 10:09:26.277231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.982 [2024-06-10 10:09:26.277249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.593 ms 00:20:36.982 [2024-06-10 10:09:26.277268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.982 [2024-06-10 10:09:26.277407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.982 [2024-06-10 10:09:26.277427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.982 [2024-06-10 10:09:26.277439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:36.982 [2024-06-10 10:09:26.277450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.982 [2024-06-10 
10:09:26.278549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:36.982 [2024-06-10 10:09:26.282676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.795 ms, result 0 00:20:36.982 [2024-06-10 10:09:26.283551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:36.982 [2024-06-10 10:09:26.299635] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:47.787  Copying: 23/256 [MB] (23 MBps) Copying: 49/256 [MB] (25 MBps) Copying: 74/256 [MB] (25 MBps) Copying: 98/256 [MB] (24 MBps) Copying: 122/256 [MB] (23 MBps) Copying: 145/256 [MB] (23 MBps) Copying: 169/256 [MB] (23 MBps) Copying: 192/256 [MB] (23 MBps) Copying: 215/256 [MB] (23 MBps) Copying: 239/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-06-10 10:09:37.022539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:47.787 [2024-06-10 10:09:37.034593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.034674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:47.787 [2024-06-10 10:09:37.034712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:47.787 [2024-06-10 10:09:37.034723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.034756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:47.787 [2024-06-10 10:09:37.037901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.037933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:47.787 [2024-06-10 10:09:37.037963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:20:47.787 [2024-06-10 10:09:37.037984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.039796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.039837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:47.787 [2024-06-10 10:09:37.039885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:20:47.787 [2024-06-10 10:09:37.039897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.047163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.047214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:47.787 [2024-06-10 10:09:37.047231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.240 ms 00:20:47.787 [2024-06-10 10:09:37.047242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.054590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.054706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:47.787 [2024-06-10 10:09:37.054740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.267 ms 00:20:47.787 [2024-06-10 10:09:37.054751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.088447] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.088510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:47.787 [2024-06-10 10:09:37.088562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.612 ms 00:20:47.787 [2024-06-10 10:09:37.088573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.106615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.106684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:47.787 [2024-06-10 10:09:37.106718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.933 ms 00:20:47.787 [2024-06-10 10:09:37.106729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.106914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.106950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:47.787 [2024-06-10 10:09:37.106967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:47.787 [2024-06-10 10:09:37.106978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.136194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.136260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:47.787 [2024-06-10 10:09:37.136293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.189 ms 00:20:47.787 [2024-06-10 10:09:37.136302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.166740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.166799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:47.787 [2024-06-10 10:09:37.166836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.340 ms 00:20:47.787 [2024-06-10 10:09:37.166848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.199774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.199844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:47.787 [2024-06-10 10:09:37.199894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.821 ms 00:20:47.787 [2024-06-10 10:09:37.199905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.231644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.787 [2024-06-10 10:09:37.231736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:47.787 [2024-06-10 10:09:37.231756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.579 ms 00:20:47.787 [2024-06-10 10:09:37.231767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.787 [2024-06-10 10:09:37.231866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:47.787 [2024-06-10 10:09:37.231892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 
[2024-06-10 10:09:37.231917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.231994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:20:47.787 [2024-06-10 10:09:37.232256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:47.787 [2024-06-10 10:09:37.232372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.232999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:47.788 [2024-06-10 10:09:37.233147] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:47.788 [2024-06-10 10:09:37.233158] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
4e93524f-9e0d-42fe-9154-f58916c65969 00:20:47.788 [2024-06-10 10:09:37.233179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:47.788 [2024-06-10 10:09:37.233190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:47.788 [2024-06-10 10:09:37.233201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:47.788 [2024-06-10 10:09:37.233212] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:47.788 [2024-06-10 10:09:37.233222] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:47.788 [2024-06-10 10:09:37.233247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:47.788 [2024-06-10 10:09:37.233258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:47.788 [2024-06-10 10:09:37.233268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:47.788 [2024-06-10 10:09:37.233278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:47.788 [2024-06-10 10:09:37.233290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.788 [2024-06-10 10:09:37.233301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:47.788 [2024-06-10 10:09:37.233315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:20:47.788 [2024-06-10 10:09:37.233326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.788 [2024-06-10 10:09:37.250274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.788 [2024-06-10 10:09:37.250359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:47.788 [2024-06-10 10:09:37.250409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.915 ms 00:20:47.788 [2024-06-10 10:09:37.250420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.788 [2024-06-10 10:09:37.250957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.788 [2024-06-10 10:09:37.250981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:47.788 [2024-06-10 10:09:37.250996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:20:47.788 [2024-06-10 10:09:37.251008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.788 [2024-06-10 10:09:37.290516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.788 [2024-06-10 10:09:37.290580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.788 [2024-06-10 10:09:37.290614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.788 [2024-06-10 10:09:37.290623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.788 [2024-06-10 10:09:37.290777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.788 [2024-06-10 10:09:37.290794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.788 [2024-06-10 10:09:37.290823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.788 [2024-06-10 10:09:37.290833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.788 [2024-06-10 10:09:37.290903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.788 [2024-06-10 10:09:37.290920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.788 
[2024-06-10 10:09:37.290931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.788 [2024-06-10 10:09:37.290941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.788 [2024-06-10 10:09:37.290997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.789 [2024-06-10 10:09:37.291010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.789 [2024-06-10 10:09:37.291021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.789 [2024-06-10 10:09:37.291031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.390256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.390335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.048 [2024-06-10 10:09:37.390352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.390361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.477480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.477548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.048 [2024-06-10 10:09:37.477583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.477594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.477727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.477746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.048 [2024-06-10 10:09:37.477774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.477801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.477835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.477848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.048 [2024-06-10 10:09:37.477859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.477869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.478002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.478025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.048 [2024-06-10 10:09:37.478038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.478048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.478099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.478116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:48.048 [2024-06-10 10:09:37.478128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.478139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.478184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.478205] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.048 [2024-06-10 10:09:37.478217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.478228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.478280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.048 [2024-06-10 10:09:37.478296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.048 [2024-06-10 10:09:37.478307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.048 [2024-06-10 10:09:37.478318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.048 [2024-06-10 10:09:37.478478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 443.892 ms, result 0 00:20:49.422 00:20:49.422 00:20:49.422 10:09:38 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=80794 00:20:49.422 10:09:38 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:49.422 10:09:38 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 80794 00:20:49.422 10:09:38 ftl.ftl_trim -- common/autotest_common.sh@830 -- # '[' -z 80794 ']' 00:20:49.422 10:09:38 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.422 10:09:38 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:49.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.422 10:09:38 ftl.ftl_trim -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.422 10:09:38 ftl.ftl_trim -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:49.422 10:09:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:49.422 [2024-06-10 10:09:38.808260] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:20:49.422 [2024-06-10 10:09:38.808750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80794 ] 00:20:49.680 [2024-06-10 10:09:38.986856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.937 [2024-06-10 10:09:39.216321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.505 10:09:39 ftl.ftl_trim -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:50.505 10:09:39 ftl.ftl_trim -- common/autotest_common.sh@863 -- # return 0 00:20:50.505 10:09:39 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:50.763 [2024-06-10 10:09:40.243445] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.763 [2024-06-10 10:09:40.243543] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.021 [2024-06-10 10:09:40.417190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.417254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:51.021 [2024-06-10 10:09:40.417294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.021 [2024-06-10 10:09:40.417307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.420513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.420555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.021 [2024-06-10 10:09:40.420591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.178 ms 00:20:51.021 [2024-06-10 10:09:40.420602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.420778] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:51.021 [2024-06-10 10:09:40.421853] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:51.021 [2024-06-10 10:09:40.421914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.421931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.021 [2024-06-10 10:09:40.421945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:20:51.021 [2024-06-10 10:09:40.421957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.423210] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:51.021 [2024-06-10 10:09:40.440372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.440505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:51.021 [2024-06-10 10:09:40.440528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.151 ms 00:20:51.021 [2024-06-10 10:09:40.440548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.440747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.440775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:51.021 [2024-06-10 10:09:40.440791] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:51.021 [2024-06-10 10:09:40.440805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.445918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.445986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.021 [2024-06-10 10:09:40.446004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms 00:20:51.021 [2024-06-10 10:09:40.446021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.446170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.446195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.021 [2024-06-10 10:09:40.446209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:51.021 [2024-06-10 10:09:40.446223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.446269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.446287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:51.021 [2024-06-10 10:09:40.446300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:51.021 [2024-06-10 10:09:40.446313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.446348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:51.021 [2024-06-10 10:09:40.450784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.450824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.021 [2024-06-10 10:09:40.450861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.441 ms 00:20:51.021 [2024-06-10 10:09:40.450874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.450984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.451004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:51.021 [2024-06-10 10:09:40.451022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:51.021 [2024-06-10 10:09:40.451034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.451071] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:51.021 [2024-06-10 10:09:40.451099] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:51.021 [2024-06-10 10:09:40.451168] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:51.021 [2024-06-10 10:09:40.451196] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:51.021 [2024-06-10 10:09:40.451311] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:51.021 [2024-06-10 10:09:40.451332] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:51.021 [2024-06-10 10:09:40.451362] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:51.021 [2024-06-10 10:09:40.451386] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:51.021 [2024-06-10 10:09:40.451404] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:51.021 [2024-06-10 10:09:40.451427] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:51.021 [2024-06-10 10:09:40.451453] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:51.021 [2024-06-10 10:09:40.451471] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:51.021 [2024-06-10 10:09:40.451485] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:51.021 [2024-06-10 10:09:40.451498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.451514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:51.021 [2024-06-10 10:09:40.451539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:20:51.021 [2024-06-10 10:09:40.451553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.451686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.021 [2024-06-10 10:09:40.451711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:51.021 [2024-06-10 10:09:40.451725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:51.021 [2024-06-10 10:09:40.451738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.021 [2024-06-10 10:09:40.451869] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:51.021 [2024-06-10 10:09:40.451893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:51.021 [2024-06-10 10:09:40.451907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.021 [2024-06-10 10:09:40.451926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.021 [2024-06-10 10:09:40.451939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:51.021 [2024-06-10 10:09:40.451952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:51.021 [2024-06-10 10:09:40.451963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:51.021 [2024-06-10 10:09:40.451977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:51.021 [2024-06-10 10:09:40.451988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:51.021 [2024-06-10 10:09:40.452006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.021 [2024-06-10 10:09:40.452027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:51.021 [2024-06-10 10:09:40.452047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:51.021 [2024-06-10 10:09:40.452060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.021 [2024-06-10 10:09:40.452073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:51.021 [2024-06-10 10:09:40.452085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:51.021 [2024-06-10 10:09:40.452098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.021 
[2024-06-10 10:09:40.452110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:51.021 [2024-06-10 10:09:40.452123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:51.021 [2024-06-10 10:09:40.452135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.021 [2024-06-10 10:09:40.452149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:51.021 [2024-06-10 10:09:40.452162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:51.021 [2024-06-10 10:09:40.452176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.021 [2024-06-10 10:09:40.452187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:51.021 [2024-06-10 10:09:40.452200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:51.021 [2024-06-10 10:09:40.452212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.021 [2024-06-10 10:09:40.452227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:51.021 [2024-06-10 10:09:40.452238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:51.021 [2024-06-10 10:09:40.452251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.021 [2024-06-10 10:09:40.452263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:51.021 [2024-06-10 10:09:40.452291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:51.021 [2024-06-10 10:09:40.452303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.022 [2024-06-10 10:09:40.452316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:51.022 [2024-06-10 10:09:40.452328] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:51.022 [2024-06-10 10:09:40.452341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.022 [2024-06-10 10:09:40.452353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:51.022 [2024-06-10 10:09:40.452366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:51.022 [2024-06-10 10:09:40.452377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.022 [2024-06-10 10:09:40.452390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:51.022 [2024-06-10 10:09:40.452402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:51.022 [2024-06-10 10:09:40.452414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.022 [2024-06-10 10:09:40.452426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:51.022 [2024-06-10 10:09:40.452441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:51.022 [2024-06-10 10:09:40.452453] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.022 [2024-06-10 10:09:40.452465] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:51.022 [2024-06-10 10:09:40.452478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:51.022 [2024-06-10 10:09:40.452494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.022 [2024-06-10 10:09:40.452506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.022 [2024-06-10 10:09:40.452520] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:51.022 [2024-06-10 10:09:40.452532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:51.022 [2024-06-10 10:09:40.452545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:51.022 [2024-06-10 10:09:40.452557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:51.022 [2024-06-10 10:09:40.452569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:51.022 [2024-06-10 10:09:40.452582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:51.022 [2024-06-10 10:09:40.452599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:51.022 [2024-06-10 10:09:40.452614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.022 [2024-06-10 10:09:40.452630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:51.022 [2024-06-10 10:09:40.452659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:51.022 [2024-06-10 10:09:40.452678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:51.022 [2024-06-10 10:09:40.452690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:51.022 [2024-06-10 10:09:40.452704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:51.022 [2024-06-10 10:09:40.452717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:51.022 [2024-06-10 10:09:40.452731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:51.022 [2024-06-10 10:09:40.452743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:51.022 [2024-06-10 10:09:40.452756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:51.022 [2024-06-10 10:09:40.452768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:51.022 [2024-06-10 10:09:40.452782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:51.022 [2024-06-10 10:09:40.452794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:51.022 [2024-06-10 10:09:40.452808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:51.022 [2024-06-10 10:09:40.452820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:51.022 [2024-06-10 10:09:40.452833] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:51.022 [2024-06-10 
10:09:40.452847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.022 [2024-06-10 10:09:40.452861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:51.022 [2024-06-10 10:09:40.452874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:51.022 [2024-06-10 10:09:40.452889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:51.022 [2024-06-10 10:09:40.452901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:51.022 [2024-06-10 10:09:40.452917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.022 [2024-06-10 10:09:40.452929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:51.022 [2024-06-10 10:09:40.452943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:20:51.022 [2024-06-10 10:09:40.452955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.022 [2024-06-10 10:09:40.488408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.022 [2024-06-10 10:09:40.488469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.022 [2024-06-10 10:09:40.488510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.334 ms 00:20:51.022 [2024-06-10 10:09:40.488523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.022 [2024-06-10 10:09:40.488744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.022 [2024-06-10 10:09:40.488767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:51.022 [2024-06-10 10:09:40.488783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:51.022 [2024-06-10 10:09:40.488795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.022 [2024-06-10 10:09:40.527609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.022 [2024-06-10 10:09:40.527709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.022 [2024-06-10 10:09:40.527741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.765 ms 00:20:51.022 [2024-06-10 10:09:40.527760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.022 [2024-06-10 10:09:40.527906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.022 [2024-06-10 10:09:40.527924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.022 [2024-06-10 10:09:40.527956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:51.022 [2024-06-10 10:09:40.527984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.022 [2024-06-10 10:09:40.528343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.022 [2024-06-10 10:09:40.528368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.022 [2024-06-10 10:09:40.528386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:51.022 [2024-06-10 10:09:40.528401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:51.022 [2024-06-10 10:09:40.528555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.022 [2024-06-10 10:09:40.528574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.022 [2024-06-10 10:09:40.528589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:51.022 [2024-06-10 10:09:40.528601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.546828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.546875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.279 [2024-06-10 10:09:40.546931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.196 ms 00:20:51.279 [2024-06-10 10:09:40.546943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.563855] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:51.279 [2024-06-10 10:09:40.563932] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:51.279 [2024-06-10 10:09:40.563989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.564003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:51.279 [2024-06-10 10:09:40.564019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.879 ms 00:20:51.279 [2024-06-10 10:09:40.564031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.595047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.595118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:51.279 [2024-06-10 10:09:40.595167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.894 ms 00:20:51.279 [2024-06-10 10:09:40.595182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.611020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.611210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:51.279 [2024-06-10 10:09:40.611339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.681 ms 00:20:51.279 [2024-06-10 10:09:40.611394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.626150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.626346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:51.279 [2024-06-10 10:09:40.626489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.609 ms 00:20:51.279 [2024-06-10 10:09:40.626513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.627518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.627586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:51.279 [2024-06-10 10:09:40.627620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:20:51.279 [2024-06-10 10:09:40.627632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 
10:09:40.705390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.705784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:51.279 [2024-06-10 10:09:40.705825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.713 ms 00:20:51.279 [2024-06-10 10:09:40.705840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.718297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:51.279 [2024-06-10 10:09:40.732224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.732308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:51.279 [2024-06-10 10:09:40.732328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.203 ms 00:20:51.279 [2024-06-10 10:09:40.732345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.732478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.732501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:51.279 [2024-06-10 10:09:40.732515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:51.279 [2024-06-10 10:09:40.732529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.732608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.732625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:51.279 [2024-06-10 10:09:40.732637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:51.279 [2024-06-10 10:09:40.732650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.732725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.732758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:51.279 [2024-06-10 10:09:40.732805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:51.279 [2024-06-10 10:09:40.732819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.732859] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:51.279 [2024-06-10 10:09:40.732877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.732888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:51.279 [2024-06-10 10:09:40.732904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:51.279 [2024-06-10 10:09:40.732931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.763021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.763063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:51.279 [2024-06-10 10:09:40.763100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.042 ms 00:20:51.279 [2024-06-10 10:09:40.763112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.763259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.279 [2024-06-10 10:09:40.763281] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:51.279 [2024-06-10 10:09:40.763296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:51.279 [2024-06-10 10:09:40.763308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.279 [2024-06-10 10:09:40.764532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.279 [2024-06-10 10:09:40.768478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.877 ms, result 0 00:20:51.279 [2024-06-10 10:09:40.769666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.537 Some configs were skipped because the RPC state that can call them passed over. 00:20:51.537 10:09:40 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:51.795 [2024-06-10 10:09:41.067508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.795 [2024-06-10 10:09:41.067929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:51.795 [2024-06-10 10:09:41.068064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:20:51.795 [2024-06-10 10:09:41.068141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.795 [2024-06-10 10:09:41.068329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.468 ms, result 0 00:20:51.795 true 00:20:51.795 10:09:41 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:52.052 [2024-06-10 10:09:41.359568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.052 [2024-06-10 10:09:41.359659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:52.052 [2024-06-10 10:09:41.359686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:20:52.052 [2024-06-10 10:09:41.359699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.052 [2024-06-10 10:09:41.359768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.365 ms, result 0 00:20:52.052 true 00:20:52.052 10:09:41 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 80794 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@949 -- # '[' -z 80794 ']' 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@953 -- # kill -0 80794 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # uname 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 80794 00:20:52.052 killing process with pid 80794 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@967 -- # echo 'killing process with pid 80794' 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@968 -- # kill 80794 00:20:52.052 10:09:41 ftl.ftl_trim -- common/autotest_common.sh@973 -- # wait 80794 00:20:53.027 [2024-06-10 10:09:42.332157] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.027 [2024-06-10 10:09:42.332227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:53.027 [2024-06-10 10:09:42.332248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:53.028 [2024-06-10 10:09:42.332262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.332292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:53.028 [2024-06-10 10:09:42.335643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.335700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:53.028 [2024-06-10 10:09:42.335723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:20:53.028 [2024-06-10 10:09:42.335735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.336036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.336060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:53.028 [2024-06-10 10:09:42.336076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:20:53.028 [2024-06-10 10:09:42.336087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.340278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.340322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:53.028 [2024-06-10 10:09:42.340343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:20:53.028 [2024-06-10 10:09:42.340357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.347596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.347629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:53.028 [2024-06-10 10:09:42.347696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.189 ms 00:20:53.028 [2024-06-10 10:09:42.347710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.359770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.359809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:53.028 [2024-06-10 10:09:42.359845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.997 ms 00:20:53.028 [2024-06-10 10:09:42.359857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.368592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.368635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:53.028 [2024-06-10 10:09:42.368686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.685 ms 00:20:53.028 [2024-06-10 10:09:42.368702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.368863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.368884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:53.028 [2024-06-10 10:09:42.368900] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:53.028 [2024-06-10 10:09:42.368912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.382098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.382146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:53.028 [2024-06-10 10:09:42.382183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.157 ms 00:20:53.028 [2024-06-10 10:09:42.382196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.395069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.395105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:53.028 [2024-06-10 10:09:42.395165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.823 ms 00:20:53.028 [2024-06-10 10:09:42.395178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.407473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.407529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:53.028 [2024-06-10 10:09:42.407564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.238 ms 00:20:53.028 [2024-06-10 10:09:42.407591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.420143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.028 [2024-06-10 10:09:42.420188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:53.028 [2024-06-10 10:09:42.420209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.440 ms 00:20:53.028 [2024-06-10 10:09:42.420221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.028 [2024-06-10 10:09:42.420271] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:53.028 [2024-06-10 10:09:42.420297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 
10:09:42.420439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:53.028 [2024-06-10 10:09:42.420807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:53.028 [2024-06-10 10:09:42.420834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.420990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:53.029 [2024-06-10 10:09:42.421722] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:53.029 [2024-06-10 10:09:42.421737] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e93524f-9e0d-42fe-9154-f58916c65969 00:20:53.029 [2024-06-10 10:09:42.421753] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:53.029 [2024-06-10 10:09:42.421768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:53.029 [2024-06-10 10:09:42.421780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:53.029 [2024-06-10 10:09:42.421794] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:53.029 [2024-06-10 10:09:42.421805] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:53.029 [2024-06-10 10:09:42.421818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:53.029 [2024-06-10 10:09:42.421831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:53.030 [2024-06-10 10:09:42.421843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:53.030 [2024-06-10 10:09:42.421853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:53.030 [2024-06-10 10:09:42.421867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:53.030 [2024-06-10 10:09:42.421892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:53.030 [2024-06-10 10:09:42.421908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:20:53.030 [2024-06-10 10:09:42.421920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.030 [2024-06-10 10:09:42.438982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.030 [2024-06-10 10:09:42.439040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:53.030 [2024-06-10 10:09:42.439063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.011 ms 00:20:53.030 [2024-06-10 10:09:42.439075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.030 [2024-06-10 10:09:42.439602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.030 [2024-06-10 10:09:42.439636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:53.030 [2024-06-10 10:09:42.439690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:53.030 [2024-06-10 10:09:42.439703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.030 [2024-06-10 10:09:42.494850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.030 [2024-06-10 10:09:42.494923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:53.030 [2024-06-10 10:09:42.494963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.030 [2024-06-10 10:09:42.494975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.030 [2024-06-10 10:09:42.495117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.030 [2024-06-10 10:09:42.495164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:53.030 [2024-06-10 10:09:42.495182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.030 [2024-06-10 10:09:42.495195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.030 [2024-06-10 10:09:42.495275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.030 [2024-06-10 10:09:42.495294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:53.030 [2024-06-10 10:09:42.495309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.030 [2024-06-10 10:09:42.495321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.030 [2024-06-10 10:09:42.495353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.030 [2024-06-10 10:09:42.495368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:53.030 [2024-06-10 10:09:42.495382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.030 [2024-06-10 10:09:42.495394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.592333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.592394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:53.288 [2024-06-10 10:09:42.592432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.592444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 
10:09:42.676961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.677029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:53.288 [2024-06-10 10:09:42.677053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.677072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.677183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.677202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.288 [2024-06-10 10:09:42.677218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.677231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.677273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.677288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.288 [2024-06-10 10:09:42.677302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.677314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.677447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.677469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.288 [2024-06-10 10:09:42.677484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.677497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.677554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.677574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:53.288 [2024-06-10 10:09:42.677588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.677600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.677678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.677698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:53.288 [2024-06-10 10:09:42.677717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.677730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.677792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.288 [2024-06-10 10:09:42.677809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.288 [2024-06-10 10:09:42.677824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.288 [2024-06-10 10:09:42.677837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.288 [2024-06-10 10:09:42.678001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.821 ms, result 0 00:20:54.222 10:09:43 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:54.222 10:09:43 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:54.222 [2024-06-10 10:09:43.724800] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:20:54.222 [2024-06-10 10:09:43.724987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80858 ] 00:20:54.513 [2024-06-10 10:09:43.899064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.771 [2024-06-10 10:09:44.083048] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.029 [2024-06-10 10:09:44.395428] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.029 [2024-06-10 10:09:44.395563] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.292 [2024-06-10 10:09:44.554269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.554348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.292 [2024-06-10 10:09:44.554381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.292 [2024-06-10 10:09:44.554394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.557676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.557722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.292 [2024-06-10 10:09:44.557740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:20:55.292 [2024-06-10 10:09:44.557751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.557910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:55.292 [2024-06-10 10:09:44.558896] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.292 [2024-06-10 10:09:44.558941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.558956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.292 [2024-06-10 10:09:44.558969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:20:55.292 [2024-06-10 10:09:44.558981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.560318] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:55.292 [2024-06-10 10:09:44.577003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.577091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:55.292 [2024-06-10 10:09:44.577111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.683 ms 00:20:55.292 [2024-06-10 10:09:44.577130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.577310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.577333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:55.292 [2024-06-10 10:09:44.577347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 
ms 00:20:55.292 [2024-06-10 10:09:44.577359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.582323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.582371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.292 [2024-06-10 10:09:44.582396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.902 ms 00:20:55.292 [2024-06-10 10:09:44.582407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.582565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.582587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.292 [2024-06-10 10:09:44.582600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:55.292 [2024-06-10 10:09:44.582611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.582676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.582703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.292 [2024-06-10 10:09:44.582717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:55.292 [2024-06-10 10:09:44.582733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.582772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:55.292 [2024-06-10 10:09:44.587162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.587206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.292 [2024-06-10 10:09:44.587223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.383 ms 00:20:55.292 [2024-06-10 10:09:44.587235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.587336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.587356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.292 [2024-06-10 10:09:44.587370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:55.292 [2024-06-10 10:09:44.587381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.587414] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:55.292 [2024-06-10 10:09:44.587443] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:55.292 [2024-06-10 10:09:44.587491] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:55.292 [2024-06-10 10:09:44.587513] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:55.292 [2024-06-10 10:09:44.587632] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:55.292 [2024-06-10 10:09:44.587648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.292 [2024-06-10 10:09:44.587685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x168 bytes 00:20:55.292 [2024-06-10 10:09:44.587700] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.292 [2024-06-10 10:09:44.587730] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.292 [2024-06-10 10:09:44.587743] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:55.292 [2024-06-10 10:09:44.587754] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:55.292 [2024-06-10 10:09:44.587771] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:55.292 [2024-06-10 10:09:44.587781] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:55.292 [2024-06-10 10:09:44.587793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.587804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.292 [2024-06-10 10:09:44.587816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:20:55.292 [2024-06-10 10:09:44.587827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.587926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.292 [2024-06-10 10:09:44.587942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.292 [2024-06-10 10:09:44.587955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:55.292 [2024-06-10 10:09:44.587965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.292 [2024-06-10 10:09:44.588081] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.292 [2024-06-10 10:09:44.588104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.292 [2024-06-10 10:09:44.588117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.292 [2024-06-10 10:09:44.588128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.292 [2024-06-10 10:09:44.588151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:55.292 [2024-06-10 10:09:44.588172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.292 [2024-06-10 10:09:44.588183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.292 [2024-06-10 10:09:44.588204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.292 [2024-06-10 10:09:44.588214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:55.292 [2024-06-10 10:09:44.588224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.292 [2024-06-10 10:09:44.588234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.292 [2024-06-10 10:09:44.588245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:55.292 [2024-06-10 10:09:44.588255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588265] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:20:55.292 [2024-06-10 10:09:44.588275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:55.292 [2024-06-10 10:09:44.588285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.292 [2024-06-10 10:09:44.588320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.292 [2024-06-10 10:09:44.588341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.292 [2024-06-10 10:09:44.588352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.292 [2024-06-10 10:09:44.588375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.292 [2024-06-10 10:09:44.588385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.292 [2024-06-10 10:09:44.588405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.292 [2024-06-10 10:09:44.588415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.292 [2024-06-10 10:09:44.588435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.292 [2024-06-10 10:09:44.588445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:55.292 [2024-06-10 10:09:44.588455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.292 [2024-06-10 10:09:44.588465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:55.293 [2024-06-10 10:09:44.588476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:55.293 [2024-06-10 10:09:44.588486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.293 [2024-06-10 10:09:44.588496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:55.293 [2024-06-10 10:09:44.588507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:55.293 [2024-06-10 10:09:44.588516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.293 [2024-06-10 10:09:44.588527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:55.293 [2024-06-10 10:09:44.588537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:55.293 [2024-06-10 10:09:44.588547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.293 [2024-06-10 10:09:44.588557] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.293 [2024-06-10 10:09:44.588568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:55.293 [2024-06-10 10:09:44.588579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.293 [2024-06-10 10:09:44.588590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.293 [2024-06-10 10:09:44.588602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:55.293 [2024-06-10 10:09:44.588612] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.293 [2024-06-10 10:09:44.588622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.293 [2024-06-10 10:09:44.588633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.293 [2024-06-10 10:09:44.588659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.293 [2024-06-10 10:09:44.588670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:55.293 [2024-06-10 10:09:44.588682] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.293 [2024-06-10 10:09:44.588696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.293 [2024-06-10 10:09:44.588714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:55.293 [2024-06-10 10:09:44.588727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:55.293 [2024-06-10 10:09:44.588738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:55.293 [2024-06-10 10:09:44.588749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:55.293 [2024-06-10 10:09:44.588760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:55.293 [2024-06-10 10:09:44.588772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:55.293 [2024-06-10 10:09:44.588783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:55.293 [2024-06-10 10:09:44.588794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:55.293 [2024-06-10 10:09:44.588806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:55.293 [2024-06-10 10:09:44.588817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:55.293 [2024-06-10 10:09:44.588827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:55.293 [2024-06-10 10:09:44.588839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:55.293 [2024-06-10 10:09:44.588850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:55.293 [2024-06-10 10:09:44.588861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:55.293 [2024-06-10 10:09:44.588872] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.293 [2024-06-10 10:09:44.588885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.293 [2024-06-10 10:09:44.588897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.293 [2024-06-10 10:09:44.588908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.293 [2024-06-10 10:09:44.588920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.293 [2024-06-10 10:09:44.588931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.293 [2024-06-10 10:09:44.588944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.588955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.293 [2024-06-10 10:09:44.588967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:20:55.293 [2024-06-10 10:09:44.588978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.630004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.630074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.293 [2024-06-10 10:09:44.630097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.951 ms 00:20:55.293 [2024-06-10 10:09:44.630109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.630313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.630334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.293 [2024-06-10 10:09:44.630355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:55.293 [2024-06-10 10:09:44.630371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.669501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.669584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.293 [2024-06-10 10:09:44.669605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.096 ms 00:20:55.293 [2024-06-10 10:09:44.669617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.669780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.669807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.293 [2024-06-10 10:09:44.669821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.293 [2024-06-10 10:09:44.669832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.670180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.670205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.293 [2024-06-10 10:09:44.670219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:20:55.293 [2024-06-10 10:09:44.670230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.670389] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.670408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.293 [2024-06-10 10:09:44.670424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:55.293 [2024-06-10 10:09:44.670436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.686973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.687060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.293 [2024-06-10 10:09:44.687081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.505 ms 00:20:55.293 [2024-06-10 10:09:44.687092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.703878] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:55.293 [2024-06-10 10:09:44.703946] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.293 [2024-06-10 10:09:44.703968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.703981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.293 [2024-06-10 10:09:44.703995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.674 ms 00:20:55.293 [2024-06-10 10:09:44.704006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.734425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.734537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.293 [2024-06-10 10:09:44.734559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.247 ms 00:20:55.293 [2024-06-10 10:09:44.734570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.751142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.751226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.293 [2024-06-10 10:09:44.751246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.358 ms 00:20:55.293 [2024-06-10 10:09:44.751258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.767227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.767308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.293 [2024-06-10 10:09:44.767329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.831 ms 00:20:55.293 [2024-06-10 10:09:44.767340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.293 [2024-06-10 10:09:44.768239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.293 [2024-06-10 10:09:44.768276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.293 [2024-06-10 10:09:44.768298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:20:55.293 [2024-06-10 10:09:44.768310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.842126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 
10:09:44.842200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.553 [2024-06-10 10:09:44.842228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.780 ms 00:20:55.553 [2024-06-10 10:09:44.842240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.855168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:55.553 [2024-06-10 10:09:44.869448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 10:09:44.869516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.553 [2024-06-10 10:09:44.869553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.061 ms 00:20:55.553 [2024-06-10 10:09:44.869565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.869741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 10:09:44.869765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.553 [2024-06-10 10:09:44.869784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:55.553 [2024-06-10 10:09:44.869795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.869866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 10:09:44.869884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.553 [2024-06-10 10:09:44.869896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:55.553 [2024-06-10 10:09:44.869907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.869940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 10:09:44.869954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.553 [2024-06-10 10:09:44.869966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:55.553 [2024-06-10 10:09:44.869977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.870022] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.553 [2024-06-10 10:09:44.870038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 10:09:44.870050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.553 [2024-06-10 10:09:44.870061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:55.553 [2024-06-10 10:09:44.870072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.901919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 10:09:44.901970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.553 [2024-06-10 10:09:44.901994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.816 ms 00:20:55.553 [2024-06-10 10:09:44.902006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.902133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.553 [2024-06-10 10:09:44.902153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.553 [2024-06-10 
10:09:44.902166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:55.553 [2024-06-10 10:09:44.902177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.553 [2024-06-10 10:09:44.903177] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.553 [2024-06-10 10:09:44.907214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 348.524 ms, result 0 00:20:55.553 [2024-06-10 10:09:44.907933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.553 [2024-06-10 10:09:44.924279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:06.379 Copying: 256/256 [MB] (average 23 MBps) [2024-06-10 10:09:55.854086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:06.379 [2024-06-10 10:09:55.866387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.379 [2024-06-10 10:09:55.866428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:06.379 [2024-06-10 10:09:55.866463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:06.379 [2024-06-10 10:09:55.866473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.379 [2024-06-10 10:09:55.866501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:06.379 [2024-06-10 10:09:55.869830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.379 [2024-06-10 10:09:55.869865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:06.379 [2024-06-10 10:09:55.869903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.309 ms 00:21:06.379 [2024-06-10 10:09:55.869914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.379 [2024-06-10 10:09:55.870232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.379 [2024-06-10 10:09:55.870249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:06.379 [2024-06-10 10:09:55.870260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:21:06.379 [2024-06-10 10:09:55.870270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.379 [2024-06-10 10:09:55.873954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.379 [2024-06-10 10:09:55.873990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:06.379 [2024-06-10 10:09:55.874021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:21:06.379 [2024-06-10 10:09:55.874031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.379 [2024-06-10 10:09:55.881015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.379 [2024-06-10 10:09:55.881050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:06.379 [2024-06-10 10:09:55.881081]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.952 ms 00:21:06.379 [2024-06-10 10:09:55.881091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:55.910566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-06-10 10:09:55.910607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.639 [2024-06-10 10:09:55.910639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.406 ms 00:21:06.639 [2024-06-10 10:09:55.910649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:55.927327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-06-10 10:09:55.927368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.639 [2024-06-10 10:09:55.927401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.583 ms 00:21:06.639 [2024-06-10 10:09:55.927427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:55.927614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-06-10 10:09:55.927636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:06.639 [2024-06-10 10:09:55.927647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:06.639 [2024-06-10 10:09:55.927657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:55.957336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-06-10 10:09:55.957387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:06.639 [2024-06-10 10:09:55.957420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.646 ms 00:21:06.639 [2024-06-10 10:09:55.957430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:55.985812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-06-10 10:09:55.985850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:06.639 [2024-06-10 10:09:55.985882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.325 ms 00:21:06.639 [2024-06-10 10:09:55.985891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:56.014287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-06-10 10:09:56.014326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:06.639 [2024-06-10 10:09:56.014358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.316 ms 00:21:06.639 [2024-06-10 10:09:56.014367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:56.043537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-06-10 10:09:56.043591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:06.639 [2024-06-10 10:09:56.043622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.086 ms 00:21:06.639 [2024-06-10 10:09:56.043632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-06-10 10:09:56.043719] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:06.639 [2024-06-10 10:09:56.043743] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:06.639 [2024-06-10 10:09:56.043903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.043914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.043940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.043951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.043961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.043971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.043982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.043992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044024] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 
10:09:56.044502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:21:06.640 [2024-06-10 10:09:56.044920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.044995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:21:06.640 [2024-06-10 10:09:56.045319] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:06.640 [2024-06-10 10:09:56.045349] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e93524f-9e0d-42fe-9154-f58916c65969 00:21:06.640 [2024-06-10 10:09:56.045361] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:06.640 [2024-06-10 10:09:56.045371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:06.640 [2024-06-10 10:09:56.045381] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:06.640 [2024-06-10 10:09:56.045391] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:06.640 [2024-06-10 10:09:56.045413] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:06.640 [2024-06-10 10:09:56.045424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:06.641 [2024-06-10 10:09:56.045434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:06.641 [2024-06-10 10:09:56.045465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:06.641 [2024-06-10 10:09:56.045483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:06.641 [2024-06-10 10:09:56.045501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.641 [2024-06-10 10:09:56.045523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:06.641 [2024-06-10 10:09:56.045545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:21:06.641 [2024-06-10 10:09:56.045566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.641 [2024-06-10 10:09:56.061769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.641 [2024-06-10 10:09:56.061811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:06.641 [2024-06-10 10:09:56.061829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.153 ms 00:21:06.641 [2024-06-10 10:09:56.061841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.641 [2024-06-10 10:09:56.062291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.641 [2024-06-10 10:09:56.062313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:06.641 [2024-06-10 10:09:56.062342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:21:06.641 [2024-06-10 10:09:56.062360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.641 [2024-06-10 10:09:56.103339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.641 [2024-06-10 10:09:56.103392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.641 [2024-06-10 10:09:56.103409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.641 [2024-06-10 10:09:56.103435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.641 [2024-06-10 10:09:56.103552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.641 [2024-06-10 10:09:56.103568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.641 [2024-06-10 10:09:56.103579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.641 [2024-06-10 10:09:56.103596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
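The statistics dump just above ends with "WAF: inf": the device recorded 960 total (media) writes but 0 user writes, and write amplification is conventionally the ratio of media writes to user writes, which is undefined when the user wrote nothing. A minimal sketch of that computation, assuming this conventional definition (the exact counters used by ftl_debug.c are not shown in this excerpt):

# Write amplification factor as a ratio of media writes to user writes;
# with zero user writes the ratio is undefined and reported as infinity,
# matching the "WAF: inf" record in the dump above.
def waf(total_writes: int, user_writes: int) -> float:
    if user_writes == 0:
        return float("inf")
    return total_writes / user_writes

print(waf(960, 0))  # -> inf (total writes: 960, user writes: 0, as logged)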
00:21:06.641 [2024-06-10 10:09:56.103655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.641 [2024-06-10 10:09:56.103713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.641 [2024-06-10 10:09:56.103742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.641 [2024-06-10 10:09:56.103752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.641 [2024-06-10 10:09:56.103777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.641 [2024-06-10 10:09:56.103791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.641 [2024-06-10 10:09:56.103802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.641 [2024-06-10 10:09:56.103812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.194620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.194689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.899 [2024-06-10 10:09:56.194723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 10:09:56.194734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.272242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.272313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.899 [2024-06-10 10:09:56.272363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 10:09:56.272386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.272462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.272477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.899 [2024-06-10 10:09:56.272488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 10:09:56.272498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.272529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.272541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.899 [2024-06-10 10:09:56.272551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 10:09:56.272561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.272720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.272741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.899 [2024-06-10 10:09:56.272752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 10:09:56.272762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.272814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.272831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:06.899 [2024-06-10 10:09:56.272843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 
10:09:56.272853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.272903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.272917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.899 [2024-06-10 10:09:56.272928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 10:09:56.272938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.272988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.899 [2024-06-10 10:09:56.273004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.899 [2024-06-10 10:09:56.273014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.899 [2024-06-10 10:09:56.273024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-06-10 10:09:56.273194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.797 ms, result 0 00:21:07.842 00:21:07.842 00:21:07.842 10:09:57 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:07.842 10:09:57 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:08.429 10:09:57 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:08.429 [2024-06-10 10:09:57.938955] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
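The trim test commands above compare exactly 4194304 bytes of the dumped data file (cmp --bytes=4194304) and then rewrite the device with spdk_dd --count=1024. Assuming the test writes in 4 KiB logical blocks (an assumption; the block size is not printed in this excerpt), the two numbers describe the same 4 MiB region:

# Back-of-the-envelope check under an assumed 4 KiB block size:
# 1024 blocks written by spdk_dd equal the 4194304-byte window used by cmp.
ASSUMED_BLOCK_SIZE = 4096   # bytes per block (hypothetical for this check)
COUNT = 1024                # --count from the spdk_dd invocation above
assert ASSUMED_BLOCK_SIZE * COUNT == 4194304 == 4 * 1024 * 1024  # 4 MiB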
00:21:08.429 [2024-06-10 10:09:57.939107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81002 ] 00:21:08.687 [2024-06-10 10:09:58.103017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.946 [2024-06-10 10:09:58.309342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.205 [2024-06-10 10:09:58.602901] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.205 [2024-06-10 10:09:58.602985] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.465 [2024-06-10 10:09:58.756815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.465 [2024-06-10 10:09:58.756879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:09.465 [2024-06-10 10:09:58.756915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.465 [2024-06-10 10:09:58.756925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.465 [2024-06-10 10:09:58.759991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.465 [2024-06-10 10:09:58.760045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.465 [2024-06-10 10:09:58.760077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.039 ms 00:21:09.465 [2024-06-10 10:09:58.760088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.465 [2024-06-10 10:09:58.760229] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:09.466 [2024-06-10 10:09:58.761241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:09.466 [2024-06-10 10:09:58.761283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.761314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.466 [2024-06-10 10:09:58.761325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:21:09.466 [2024-06-10 10:09:58.761335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.762636] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:09.466 [2024-06-10 10:09:58.777357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.777397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:09.466 [2024-06-10 10:09:58.777430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.722 ms 00:21:09.466 [2024-06-10 10:09:58.777446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.777552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.777572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:09.466 [2024-06-10 10:09:58.777584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:09.466 [2024-06-10 10:09:58.777594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.782161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 
10:09:58.782200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.466 [2024-06-10 10:09:58.782237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.498 ms 00:21:09.466 [2024-06-10 10:09:58.782247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.782353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.782372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.466 [2024-06-10 10:09:58.782383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:09.466 [2024-06-10 10:09:58.782393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.782431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.782445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:09.466 [2024-06-10 10:09:58.782456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:09.466 [2024-06-10 10:09:58.782469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.782496] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:09.466 [2024-06-10 10:09:58.786474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.786510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.466 [2024-06-10 10:09:58.786541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.985 ms 00:21:09.466 [2024-06-10 10:09:58.786551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.786612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.786628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:09.466 [2024-06-10 10:09:58.786639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:09.466 [2024-06-10 10:09:58.786648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.786713] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:09.466 [2024-06-10 10:09:58.786743] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:09.466 [2024-06-10 10:09:58.786786] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:09.466 [2024-06-10 10:09:58.786805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:09.466 [2024-06-10 10:09:58.786898] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:09.466 [2024-06-10 10:09:58.786912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:09.466 [2024-06-10 10:09:58.786925] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:09.466 [2024-06-10 10:09:58.786938] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:09.466 [2024-06-10 10:09:58.786949] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:09.466 [2024-06-10 10:09:58.786960] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:09.466 [2024-06-10 10:09:58.786970] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:09.466 [2024-06-10 10:09:58.786984] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:09.466 [2024-06-10 10:09:58.786993] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:09.466 [2024-06-10 10:09:58.787004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.787013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:09.466 [2024-06-10 10:09:58.787024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:09.466 [2024-06-10 10:09:58.787049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.787157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.466 [2024-06-10 10:09:58.787188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:09.466 [2024-06-10 10:09:58.787199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:09.466 [2024-06-10 10:09:58.787208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.466 [2024-06-10 10:09:58.787312] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:09.466 [2024-06-10 10:09:58.787327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:09.466 [2024-06-10 10:09:58.787338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:09.466 [2024-06-10 10:09:58.787370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:09.466 [2024-06-10 10:09:58.787400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.466 [2024-06-10 10:09:58.787419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:09.466 [2024-06-10 10:09:58.787429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:09.466 [2024-06-10 10:09:58.787438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.466 [2024-06-10 10:09:58.787448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:09.466 [2024-06-10 10:09:58.787457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:09.466 [2024-06-10 10:09:58.787483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:09.466 [2024-06-10 10:09:58.787502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787511] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:09.466 [2024-06-10 10:09:58.787542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:09.466 [2024-06-10 10:09:58.787586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:09.466 [2024-06-10 10:09:58.787613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:09.466 [2024-06-10 10:09:58.787640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:09.466 [2024-06-10 10:09:58.787667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.466 [2024-06-10 10:09:58.787685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:09.466 [2024-06-10 10:09:58.787693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:09.466 [2024-06-10 10:09:58.787739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.466 [2024-06-10 10:09:58.787751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:09.466 [2024-06-10 10:09:58.787761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:09.466 [2024-06-10 10:09:58.787770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:09.466 [2024-06-10 10:09:58.787789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:09.466 [2024-06-10 10:09:58.787799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787808] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:09.466 [2024-06-10 10:09:58.787834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:09.466 [2024-06-10 10:09:58.787845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.466 [2024-06-10 10:09:58.787855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.466 [2024-06-10 10:09:58.787867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:09.466 [2024-06-10 10:09:58.787877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:09.467 [2024-06-10 10:09:58.787887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:09.467 [2024-06-10 10:09:58.787896] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:09.467 [2024-06-10 10:09:58.787906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:09.467 [2024-06-10 10:09:58.787920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:09.467 [2024-06-10 10:09:58.787931] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:09.467 [2024-06-10 10:09:58.787944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.467 [2024-06-10 10:09:58.787961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:09.467 [2024-06-10 10:09:58.787971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:09.467 [2024-06-10 10:09:58.787981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:09.467 [2024-06-10 10:09:58.787991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:09.467 [2024-06-10 10:09:58.788002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:09.467 [2024-06-10 10:09:58.788011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:09.467 [2024-06-10 10:09:58.788021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:09.467 [2024-06-10 10:09:58.788031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:09.467 [2024-06-10 10:09:58.788042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:09.467 [2024-06-10 10:09:58.788052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:09.467 [2024-06-10 10:09:58.788062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:09.467 [2024-06-10 10:09:58.788072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:09.467 [2024-06-10 10:09:58.788081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:09.467 [2024-06-10 10:09:58.788092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:09.467 [2024-06-10 10:09:58.788118] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:09.467 [2024-06-10 10:09:58.788146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.467 [2024-06-10 10:09:58.788172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:21:09.467 [2024-06-10 10:09:58.788183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:09.467 [2024-06-10 10:09:58.788193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:09.467 [2024-06-10 10:09:58.788203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:09.467 [2024-06-10 10:09:58.788215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.788226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:09.467 [2024-06-10 10:09:58.788236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:21:09.467 [2024-06-10 10:09:58.788247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.827893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.827949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.467 [2024-06-10 10:09:58.827985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.559 ms 00:21:09.467 [2024-06-10 10:09:58.827996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.828191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.828211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:09.467 [2024-06-10 10:09:58.828229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:09.467 [2024-06-10 10:09:58.828243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.862706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.862751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.467 [2024-06-10 10:09:58.862767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.432 ms 00:21:09.467 [2024-06-10 10:09:58.862778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.862880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.862902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.467 [2024-06-10 10:09:58.862914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.467 [2024-06-10 10:09:58.862923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.863281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.863300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.467 [2024-06-10 10:09:58.863311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:21:09.467 [2024-06-10 10:09:58.863321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.863485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.863502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.467 [2024-06-10 10:09:58.863516] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:09.467 [2024-06-10 10:09:58.863526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.878512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.878555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.467 [2024-06-10 10:09:58.878590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.943 ms 00:21:09.467 [2024-06-10 10:09:58.878602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.893910] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:09.467 [2024-06-10 10:09:58.893949] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:09.467 [2024-06-10 10:09:58.893983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.893994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:09.467 [2024-06-10 10:09:58.894006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.194 ms 00:21:09.467 [2024-06-10 10:09:58.894015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.923527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.923569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:09.467 [2024-06-10 10:09:58.923586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.429 ms 00:21:09.467 [2024-06-10 10:09:58.923612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.939223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.939267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:09.467 [2024-06-10 10:09:58.939284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.495 ms 00:21:09.467 [2024-06-10 10:09:58.939295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.954356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.954403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:09.467 [2024-06-10 10:09:58.954434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.969 ms 00:21:09.467 [2024-06-10 10:09:58.954444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.467 [2024-06-10 10:09:58.955326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.467 [2024-06-10 10:09:58.955363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:09.467 [2024-06-10 10:09:58.955383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:21:09.467 [2024-06-10 10:09:58.955395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.021679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.021745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:09.727 [2024-06-10 10:09:59.021786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.248 ms 
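Each FTL management step in this log is traced as a small group of trace_step records: an "Action" (or "Rollback") marker, then "name: ...", "duration: ... ms", and "status: ...". A hypothetical log-scraping helper (not an SPDK tool; it assumes one record per line, as in the raw console output, and a hypothetical console.log capture) can pair names with durations to show which steps dominate, e.g. "Restore P2L checkpoints" at 66.248 ms in the startup above:

import re

# Match the "name:" and "duration:" trace_step records seen in this log.
NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DURATION = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(lines):
    # Pair each step name with the duration record that follows it.
    pairs = []
    pending = None
    for line in lines:
        m = NAME.search(line)
        if m:
            pending = m.group(1).strip()
            continue
        m = DURATION.search(line)
        if m and pending is not None:
            pairs.append((pending, float(m.group(1))))
            pending = None
    return pairs

if __name__ == "__main__":
    with open("console.log") as f:  # hypothetical file holding this console log
        slowest = sorted(step_durations(f), key=lambda p: -p[1])[:5]
    for name, ms in slowest:
        print(f"{ms:10.3f} ms  {name}")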
00:21:09.727 [2024-06-10 10:09:59.021797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.033103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:09.727 [2024-06-10 10:09:59.045929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.045988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:09.727 [2024-06-10 10:09:59.046023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.990 ms 00:21:09.727 [2024-06-10 10:09:59.046034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.046159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.046178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:09.727 [2024-06-10 10:09:59.046194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:09.727 [2024-06-10 10:09:59.046204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.046266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.046280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:09.727 [2024-06-10 10:09:59.046291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:09.727 [2024-06-10 10:09:59.046301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.046327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.046338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:09.727 [2024-06-10 10:09:59.046348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:09.727 [2024-06-10 10:09:59.046358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.046402] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:09.727 [2024-06-10 10:09:59.046418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.046428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:09.727 [2024-06-10 10:09:59.046438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:09.727 [2024-06-10 10:09:59.046448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.074584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.074703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:09.727 [2024-06-10 10:09:59.074751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.108 ms 00:21:09.727 [2024-06-10 10:09:59.074762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 10:09:59.075014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.727 [2024-06-10 10:09:59.075036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:09.727 [2024-06-10 10:09:59.075049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:09.727 [2024-06-10 10:09:59.075060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.727 [2024-06-10 
10:09:59.076418] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.727 [2024-06-10 10:09:59.080729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.214 ms, result 0 00:21:09.727 [2024-06-10 10:09:59.081875] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.727 [2024-06-10 10:09:59.097892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.987  Copying: 4096/4096 [kB] (average 24 MBps)[2024-06-10 10:09:59.265833] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.987 [2024-06-10 10:09:59.278245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.278291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.987 [2024-06-10 10:09:59.278312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.987 [2024-06-10 10:09:59.278324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.278355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:09.987 [2024-06-10 10:09:59.281759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.281811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.987 [2024-06-10 10:09:59.281850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:21:09.987 [2024-06-10 10:09:59.281867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.283536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.283595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.987 [2024-06-10 10:09:59.283628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:21:09.987 [2024-06-10 10:09:59.283639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.287754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.287793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.987 [2024-06-10 10:09:59.287810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.059 ms 00:21:09.987 [2024-06-10 10:09:59.287821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.295339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.295400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.987 [2024-06-10 10:09:59.295417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.450 ms 00:21:09.987 [2024-06-10 10:09:59.295429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.325028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.325113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.987 [2024-06-10 10:09:59.325148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.452 ms 00:21:09.987 
[2024-06-10 10:09:59.325158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.342304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.342363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.987 [2024-06-10 10:09:59.342398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.021 ms 00:21:09.987 [2024-06-10 10:09:59.342408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.342584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.342607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.987 [2024-06-10 10:09:59.342620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:09.987 [2024-06-10 10:09:59.342630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.371197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.371239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:09.987 [2024-06-10 10:09:59.371263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.512 ms 00:21:09.987 [2024-06-10 10:09:59.371274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.987 [2024-06-10 10:09:59.400444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.987 [2024-06-10 10:09:59.400501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:09.987 [2024-06-10 10:09:59.400534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.092 ms 00:21:09.988 [2024-06-10 10:09:59.400544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.988 [2024-06-10 10:09:59.430011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.988 [2024-06-10 10:09:59.430071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.988 [2024-06-10 10:09:59.430106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.399 ms 00:21:09.988 [2024-06-10 10:09:59.430116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.988 [2024-06-10 10:09:59.458183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.988 [2024-06-10 10:09:59.458247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.988 [2024-06-10 10:09:59.458282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.939 ms 00:21:09.988 [2024-06-10 10:09:59.458292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.988 [2024-06-10 10:09:59.458379] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.988 [2024-06-10 10:09:59.458404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458452] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 
10:09:59.458751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.458991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:21:09.988 [2024-06-10 10:09:59.459076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.988 [2024-06-10 10:09:59.459450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:09.989 [2024-06-10 10:09:59.459701] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.989 [2024-06-10 10:09:59.459722] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e93524f-9e0d-42fe-9154-f58916c65969 00:21:09.989 [2024-06-10 10:09:59.459734] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:09.989 [2024-06-10 10:09:59.459745] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:09.989 [2024-06-10 
10:09:59.459757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:09.989 [2024-06-10 10:09:59.459768] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:09.989 [2024-06-10 10:09:59.459791] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.989 [2024-06-10 10:09:59.459803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:09.989 [2024-06-10 10:09:59.459813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.989 [2024-06-10 10:09:59.459823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:09.989 [2024-06-10 10:09:59.459833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:09.989 [2024-06-10 10:09:59.459845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.989 [2024-06-10 10:09:59.459871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.989 [2024-06-10 10:09:59.459883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:21:09.989 [2024-06-10 10:09:59.459909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.989 [2024-06-10 10:09:59.474841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.989 [2024-06-10 10:09:59.474884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.989 [2024-06-10 10:09:59.474916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.902 ms 00:21:09.989 [2024-06-10 10:09:59.474927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.989 [2024-06-10 10:09:59.475411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.989 [2024-06-10 10:09:59.475442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.989 [2024-06-10 10:09:59.475456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:21:09.989 [2024-06-10 10:09:59.475490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.248 [2024-06-10 10:09:59.513227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.248 [2024-06-10 10:09:59.513293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.248 [2024-06-10 10:09:59.513326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.248 [2024-06-10 10:09:59.513337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.248 [2024-06-10 10:09:59.513447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.248 [2024-06-10 10:09:59.513463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.248 [2024-06-10 10:09:59.513489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.248 [2024-06-10 10:09:59.513506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.248 [2024-06-10 10:09:59.513585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.248 [2024-06-10 10:09:59.513602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.248 [2024-06-10 10:09:59.513613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.248 [2024-06-10 10:09:59.513623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.248 [2024-06-10 10:09:59.513645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:10.248 [2024-06-10 10:09:59.513657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.248 [2024-06-10 10:09:59.513668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.248 [2024-06-10 10:09:59.513694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.248 [2024-06-10 10:09:59.615338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.248 [2024-06-10 10:09:59.615394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.248 [2024-06-10 10:09:59.615412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.248 [2024-06-10 10:09:59.615429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.248 [2024-06-10 10:09:59.696640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.248 [2024-06-10 10:09:59.696739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.248 [2024-06-10 10:09:59.696775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.248 [2024-06-10 10:09:59.696811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.248 [2024-06-10 10:09:59.696896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.248 [2024-06-10 10:09:59.696912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.248 [2024-06-10 10:09:59.696923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.249 [2024-06-10 10:09:59.696950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.249 [2024-06-10 10:09:59.696999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.249 [2024-06-10 10:09:59.697013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.249 [2024-06-10 10:09:59.697024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.249 [2024-06-10 10:09:59.697050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.249 [2024-06-10 10:09:59.697190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.249 [2024-06-10 10:09:59.697214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.249 [2024-06-10 10:09:59.697227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.249 [2024-06-10 10:09:59.697238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.249 [2024-06-10 10:09:59.697290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.249 [2024-06-10 10:09:59.697307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:10.249 [2024-06-10 10:09:59.697318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.249 [2024-06-10 10:09:59.697329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.249 [2024-06-10 10:09:59.697381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.249 [2024-06-10 10:09:59.697397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.249 [2024-06-10 10:09:59.697408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.249 [2024-06-10 10:09:59.697433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.249 
[2024-06-10 10:09:59.697484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.249 [2024-06-10 10:09:59.697500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.249 [2024-06-10 10:09:59.697511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.249 [2024-06-10 10:09:59.697522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.249 [2024-06-10 10:09:59.697677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.428 ms, result 0 00:21:11.186 00:21:11.186 00:21:11.186 10:10:00 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81038 00:21:11.186 10:10:00 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:11.186 10:10:00 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81038 00:21:11.186 10:10:00 ftl.ftl_trim -- common/autotest_common.sh@830 -- # '[' -z 81038 ']' 00:21:11.186 10:10:00 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.186 10:10:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:11.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.186 10:10:00 ftl.ftl_trim -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.186 10:10:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:11.186 10:10:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:11.445 [2024-06-10 10:10:00.793678] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:11.445 [2024-06-10 10:10:00.794126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81038 ] 00:21:11.445 [2024-06-10 10:10:00.946511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.705 [2024-06-10 10:10:01.119549] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.642 10:10:01 ftl.ftl_trim -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:12.642 10:10:01 ftl.ftl_trim -- common/autotest_common.sh@863 -- # return 0 00:21:12.642 10:10:01 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:12.642 [2024-06-10 10:10:02.122343] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.642 [2024-06-10 10:10:02.122434] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.901 [2024-06-10 10:10:02.291273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.291348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:12.901 [2024-06-10 10:10:02.291388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.901 [2024-06-10 10:10:02.291401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.294602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.294693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.901 [2024-06-10 10:10:02.294717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.171 ms 00:21:12.901 [2024-06-10 10:10:02.294729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.294890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:12.901 [2024-06-10 10:10:02.295860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:12.901 [2024-06-10 10:10:02.295950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.295987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.901 [2024-06-10 10:10:02.296001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:21:12.901 [2024-06-10 10:10:02.296013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.297328] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:12.901 [2024-06-10 10:10:02.314032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.314167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:12.901 [2024-06-10 10:10:02.314189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.704 ms 00:21:12.901 [2024-06-10 10:10:02.314203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.314421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.314446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:12.901 [2024-06-10 10:10:02.314459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:12.901 [2024-06-10 10:10:02.314472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.319718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.319787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.901 [2024-06-10 10:10:02.319805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.183 ms 00:21:12.901 [2024-06-10 10:10:02.319821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.320019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.320043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.901 [2024-06-10 10:10:02.320058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:12.901 [2024-06-10 10:10:02.320070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.320116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.320135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:12.901 [2024-06-10 10:10:02.320147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:12.901 [2024-06-10 10:10:02.320159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.320192] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:12.901 [2024-06-10 10:10:02.324442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.324650] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.901 [2024-06-10 10:10:02.324811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.256 ms 00:21:12.901 [2024-06-10 10:10:02.324865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.325058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.325133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:12.901 [2024-06-10 10:10:02.325254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:12.901 [2024-06-10 10:10:02.325400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.325539] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:12.901 [2024-06-10 10:10:02.325684] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:12.901 [2024-06-10 10:10:02.325875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:12.901 [2024-06-10 10:10:02.325907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:12.901 [2024-06-10 10:10:02.326015] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:12.901 [2024-06-10 10:10:02.326033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:12.901 [2024-06-10 10:10:02.326049] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:12.901 [2024-06-10 10:10:02.326067] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326083] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326095] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:12.901 [2024-06-10 10:10:02.326108] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:12.901 [2024-06-10 10:10:02.326118] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:12.901 [2024-06-10 10:10:02.326131] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:12.901 [2024-06-10 10:10:02.326144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.326159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:12.901 [2024-06-10 10:10:02.326171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:21:12.901 [2024-06-10 10:10:02.326184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.326278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.901 [2024-06-10 10:10:02.326300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:12.901 [2024-06-10 10:10:02.326312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:12.901 [2024-06-10 10:10:02.326325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.901 [2024-06-10 10:10:02.326446] ftl_layout.c: 758:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:21:12.901 [2024-06-10 10:10:02.326470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:12.901 [2024-06-10 10:10:02.326482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:12.901 [2024-06-10 10:10:02.326523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:12.901 [2024-06-10 10:10:02.326558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.901 [2024-06-10 10:10:02.326583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:12.901 [2024-06-10 10:10:02.326595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:12.901 [2024-06-10 10:10:02.326605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.901 [2024-06-10 10:10:02.326618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:12.901 [2024-06-10 10:10:02.326628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:12.901 [2024-06-10 10:10:02.326662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:12.901 [2024-06-10 10:10:02.326692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:12.901 [2024-06-10 10:10:02.326744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:12.901 [2024-06-10 10:10:02.326779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:12.901 [2024-06-10 10:10:02.326816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:12.901 [2024-06-10 10:10:02.326867] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.901 [2024-06-10 10:10:02.326890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:12.901 [2024-06-10 10:10:02.326901] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:12.901 [2024-06-10 10:10:02.326914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.901 [2024-06-10 10:10:02.326924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:12.901 [2024-06-10 10:10:02.326937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:12.902 [2024-06-10 10:10:02.326948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.902 [2024-06-10 10:10:02.326960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:12.902 [2024-06-10 10:10:02.326970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:12.902 [2024-06-10 10:10:02.326982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.902 [2024-06-10 10:10:02.326993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:12.902 [2024-06-10 10:10:02.327008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:12.902 [2024-06-10 10:10:02.327018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.902 [2024-06-10 10:10:02.327030] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:12.902 [2024-06-10 10:10:02.327041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:12.902 [2024-06-10 10:10:02.327058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.902 [2024-06-10 10:10:02.327069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.902 [2024-06-10 10:10:02.327082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:12.902 [2024-06-10 10:10:02.327093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:12.902 [2024-06-10 10:10:02.327106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:12.902 [2024-06-10 10:10:02.327117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:12.902 [2024-06-10 10:10:02.327129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:12.902 [2024-06-10 10:10:02.327169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:12.902 [2024-06-10 10:10:02.327186] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:12.902 [2024-06-10 10:10:02.327201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.902 [2024-06-10 10:10:02.327217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:12.902 [2024-06-10 10:10:02.327229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:12.902 [2024-06-10 10:10:02.327245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:12.902 [2024-06-10 10:10:02.327257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:12.902 [2024-06-10 10:10:02.327270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:12.902 [2024-06-10 10:10:02.327283] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:12.902 [2024-06-10 10:10:02.327297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:12.902 [2024-06-10 10:10:02.327309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:12.902 [2024-06-10 10:10:02.327323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:12.902 [2024-06-10 10:10:02.327335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:12.902 [2024-06-10 10:10:02.327349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:12.902 [2024-06-10 10:10:02.327360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:12.902 [2024-06-10 10:10:02.327374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:12.902 [2024-06-10 10:10:02.327386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:12.902 [2024-06-10 10:10:02.327399] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:12.902 [2024-06-10 10:10:02.327412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.902 [2024-06-10 10:10:02.327427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:12.902 [2024-06-10 10:10:02.327439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:12.902 [2024-06-10 10:10:02.327454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:12.902 [2024-06-10 10:10:02.327466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:12.902 [2024-06-10 10:10:02.327481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.902 [2024-06-10 10:10:02.327493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:12.902 [2024-06-10 10:10:02.327508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:21:12.902 [2024-06-10 10:10:02.327519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.902 [2024-06-10 10:10:02.358760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.902 [2024-06-10 10:10:02.358834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.902 [2024-06-10 10:10:02.358875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.137 ms 00:21:12.902 [2024-06-10 10:10:02.358887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.902 [2024-06-10 10:10:02.359079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:12.902 [2024-06-10 10:10:02.359111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.902 [2024-06-10 10:10:02.359126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:12.902 [2024-06-10 10:10:02.359180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.902 [2024-06-10 10:10:02.398307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.902 [2024-06-10 10:10:02.398371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.902 [2024-06-10 10:10:02.398416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.092 ms 00:21:12.902 [2024-06-10 10:10:02.398428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.902 [2024-06-10 10:10:02.398592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.902 [2024-06-10 10:10:02.398612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.902 [2024-06-10 10:10:02.398629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.902 [2024-06-10 10:10:02.398642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.902 [2024-06-10 10:10:02.399014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.902 [2024-06-10 10:10:02.399034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.902 [2024-06-10 10:10:02.399066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:21:12.902 [2024-06-10 10:10:02.399080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.902 [2024-06-10 10:10:02.399251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.902 [2024-06-10 10:10:02.399277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.902 [2024-06-10 10:10:02.399293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:21:12.902 [2024-06-10 10:10:02.399305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.161 [2024-06-10 10:10:02.417970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.161 [2024-06-10 10:10:02.418037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.161 [2024-06-10 10:10:02.418078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.633 ms 00:21:13.161 [2024-06-10 10:10:02.418090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.161 [2024-06-10 10:10:02.434567] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:13.161 [2024-06-10 10:10:02.434611] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:13.161 [2024-06-10 10:10:02.434650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.161 [2024-06-10 10:10:02.434704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:13.161 [2024-06-10 10:10:02.434722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.412 ms 00:21:13.161 [2024-06-10 10:10:02.434751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.161 [2024-06-10 10:10:02.462821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.161 [2024-06-10 10:10:02.462889] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:21:13.161 [2024-06-10 10:10:02.462928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.963 ms
00:21:13.161 [2024-06-10 10:10:02.462940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.477558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.477637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:21:13.161 [2024-06-10 10:10:02.477691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.483 ms
00:21:13.161 [2024-06-10 10:10:02.477703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.492734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.492805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:21:13.161 [2024-06-10 10:10:02.492845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.838 ms
00:21:13.161 [2024-06-10 10:10:02.492857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.493694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.493721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:21:13.161 [2024-06-10 10:10:02.493739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms
00:21:13.161 [2024-06-10 10:10:02.493752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.568663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.568748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:21:13.161 [2024-06-10 10:10:02.568790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.867 ms
00:21:13.161 [2024-06-10 10:10:02.568801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.580782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:13.161 [2024-06-10 10:10:02.595717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.595810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:21:13.161 [2024-06-10 10:10:02.595848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.747 ms
00:21:13.161 [2024-06-10 10:10:02.595864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.596032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.596056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:21:13.161 [2024-06-10 10:10:02.596071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:21:13.161 [2024-06-10 10:10:02.596084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.596152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.596173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:21:13.161 [2024-06-10 10:10:02.596186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:21:13.161 [2024-06-10 10:10:02.596200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.596236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.596257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:21:13.161 [2024-06-10 10:10:02.596270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:21:13.161 [2024-06-10 10:10:02.596284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.596324] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:13.161 [2024-06-10 10:10:02.596343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.161 [2024-06-10 10:10:02.596355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:21:13.161 [2024-06-10 10:10:02.596371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:21:13.161 [2024-06-10 10:10:02.596398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.161 [2024-06-10 10:10:02.629735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.162 [2024-06-10 10:10:02.629792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:13.162 [2024-06-10 10:10:02.629830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.305 ms
00:21:13.162 [2024-06-10 10:10:02.629842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.162 [2024-06-10 10:10:02.629969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.162 [2024-06-10 10:10:02.629988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:13.162 [2024-06-10 10:10:02.630003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:21:13.162 [2024-06-10 10:10:02.630014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.162 [2024-06-10 10:10:02.631098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:13.162 [2024-06-10 10:10:02.634846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.450 ms, result 0
00:21:13.162 [2024-06-10 10:10:02.636049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:13.162 Some configs were skipped because the RPC state that can call them passed over.
00:21:13.420 10:10:02 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:21:13.420 [2024-06-10 10:10:02.906318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.420 [2024-06-10 10:10:02.906679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:13.420 [2024-06-10 10:10:02.906862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms
00:21:13.420 [2024-06-10 10:10:02.907005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.420 [2024-06-10 10:10:02.907108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.335 ms, result 0
00:21:13.420 true
00:21:13.420 10:10:02 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:21:13.679 [2024-06-10 10:10:03.138337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:13.679 [2024-06-10 10:10:03.138411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:21:13.679 [2024-06-10 10:10:03.138436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms
00:21:13.679 [2024-06-10 10:10:03.138449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:13.679 [2024-06-10 10:10:03.138542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.485 ms, result 0
00:21:13.679 true
00:21:13.679 10:10:03 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81038
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@949 -- # '[' -z 81038 ']'
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@953 -- # kill -0 81038
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # uname
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 81038
00:21:13.679 killing process with pid 81038
10:10:03 ftl.ftl_trim -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@967 -- # echo 'killing process with pid 81038'
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@968 -- # kill 81038
00:21:13.679 10:10:03 ftl.ftl_trim -- common/autotest_common.sh@973 -- # wait 81038
00:21:14.615 [2024-06-10 10:10:04.083174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.083253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:14.615 [2024-06-10 10:10:04.083292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:21:14.615 [2024-06-10 10:10:04.083306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.615 [2024-06-10 10:10:04.083338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:14.615 [2024-06-10 10:10:04.086386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.086421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:14.615 [2024-06-10 10:10:04.086458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms
00:21:14.615 [2024-06-10 10:10:04.086468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.615 [2024-06-10 10:10:04.086800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.086821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:14.615 [2024-06-10 10:10:04.086836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms
00:21:14.615 [2024-06-10 10:10:04.086848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.615 [2024-06-10 10:10:04.090766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.090809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:14.615 [2024-06-10 10:10:04.090829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.890 ms
00:21:14.615 [2024-06-10 10:10:04.090845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.615 [2024-06-10 10:10:04.098579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.098667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:14.615 [2024-06-10 10:10:04.098692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.683 ms
00:21:14.615 [2024-06-10 10:10:04.098705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.615 [2024-06-10 10:10:04.111296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.111389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:14.615 [2024-06-10 10:10:04.111430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.483 ms
00:21:14.615 [2024-06-10 10:10:04.111443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.615 [2024-06-10 10:10:04.119937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.119983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:14.615 [2024-06-10 10:10:04.120020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.380 ms
00:21:14.615 [2024-06-10 10:10:04.120052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.615 [2024-06-10 10:10:04.120208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.615 [2024-06-10 10:10:04.120228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:14.615 [2024-06-10 10:10:04.120243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms
00:21:14.615 [2024-06-10 10:10:04.120254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.876 [2024-06-10 10:10:04.133364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.876 [2024-06-10 10:10:04.133412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:21:14.876 [2024-06-10 10:10:04.133447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.082 ms
00:21:14.876 [2024-06-10 10:10:04.133458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.876 [2024-06-10 10:10:04.145650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.876 [2024-06-10 10:10:04.145696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:21:14.876 [2024-06-10 10:10:04.145731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.132 ms
00:21:14.876 [2024-06-10 10:10:04.145758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.876 [2024-06-10 10:10:04.157316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.876 [2024-06-10 10:10:04.157350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:14.876 [2024-06-10 10:10:04.157384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.507 ms
00:21:14.876 [2024-06-10 10:10:04.157395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.876 [2024-06-10 10:10:04.168541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.876 [2024-06-10 10:10:04.168580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:21:14.876 [2024-06-10 10:10:04.168615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.076 ms
00:21:14.876 [2024-06-10 10:10:04.168627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.876 [2024-06-10 10:10:04.168721] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:14.876 [2024-06-10 10:10:04.168777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.168987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:21:14.876 [2024-06-10 10:10:04.169338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.169991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:21:14.877 [2024-06-10 10:10:04.170182] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:14.877 [2024-06-10 10:10:04.170196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4e93524f-9e0d-42fe-9154-f58916c65969
00:21:14.877 [2024-06-10 10:10:04.170211] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:14.877 [2024-06-10 10:10:04.170227] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:14.877 [2024-06-10 10:10:04.170238] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:14.877 [2024-06-10 10:10:04.170252] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:14.877 [2024-06-10 10:10:04.170263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:14.877 [2024-06-10 10:10:04.170277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:14.877 [2024-06-10 10:10:04.170304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:14.877 [2024-06-10 10:10:04.170316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:14.877 [2024-06-10 10:10:04.170327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:14.877 [2024-06-10 10:10:04.170341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.877 [2024-06-10 10:10:04.170363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:14.877 [2024-06-10 10:10:04.170378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.625 ms
00:21:14.877 [2024-06-10 10:10:04.170390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.877 [2024-06-10 10:10:04.187729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.877 [2024-06-10 10:10:04.187789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:14.877 [2024-06-10 10:10:04.187826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.292 ms
00:21:14.877 [2024-06-10 10:10:04.187839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.877 [2024-06-10 10:10:04.188363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:14.877 [2024-06-10 10:10:04.188386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:14.877 [2024-06-10 10:10:04.188402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms
00:21:14.877 [2024-06-10 10:10:04.188414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.877 [2024-06-10 10:10:04.243260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:14.877 [2024-06-10 10:10:04.243324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:14.877 [2024-06-10 10:10:04.243363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:14.877 [2024-06-10 10:10:04.243376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.877 [2024-06-10 10:10:04.243557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:14.877 [2024-06-10 10:10:04.243576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:14.877 [2024-06-10 10:10:04.243590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:14.877 [2024-06-10 10:10:04.243602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.877 [2024-06-10 10:10:04.243675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:14.877 [2024-06-10 10:10:04.243694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:14.877 [2024-06-10 10:10:04.243748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:14.877 [2024-06-10 10:10:04.243761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.877 [2024-06-10 10:10:04.243794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:14.877 [2024-06-10 10:10:04.243810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:14.878 [2024-06-10 10:10:04.243840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:14.878 [2024-06-10 10:10:04.243852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:14.878 [2024-06-10 10:10:04.337410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:14.878 [2024-06-10 10:10:04.337486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:14.878 [2024-06-10 10:10:04.337524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:14.878 [2024-06-10 10:10:04.337536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.137 [2024-06-10 10:10:04.417560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:15.137 [2024-06-10 10:10:04.417618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:15.137 [2024-06-10 10:10:04.417670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:15.137 [2024-06-10 10:10:04.417700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.137 [2024-06-10 10:10:04.417826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:15.137 [2024-06-10 10:10:04.417845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:15.137 [2024-06-10 10:10:04.417860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:15.137 [2024-06-10 10:10:04.417871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.137 [2024-06-10 10:10:04.417912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:15.137 [2024-06-10 10:10:04.417927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:15.137 [2024-06-10 10:10:04.417940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:15.137 [2024-06-10 10:10:04.417951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.137 [2024-06-10 10:10:04.418119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:15.137 [2024-06-10 10:10:04.418140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:15.137 [2024-06-10 10:10:04.418155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:15.137 [2024-06-10 10:10:04.418167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.137 [2024-06-10 10:10:04.418237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:15.137 [2024-06-10 10:10:04.418255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:15.137 [2024-06-10 10:10:04.418269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:15.137 [2024-06-10 10:10:04.418280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.137 [2024-06-10 10:10:04.418329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:15.137 [2024-06-10 10:10:04.418352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:15.138 [2024-06-10 10:10:04.418370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:15.138 [2024-06-10 10:10:04.418381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.138 [2024-06-10 10:10:04.418438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:15.138 [2024-06-10 10:10:04.418455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:15.138 [2024-06-10 10:10:04.418469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:15.138 [2024-06-10 10:10:04.418480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:15.138 [2024-06-10 10:10:04.418634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.465 ms, result 0
00:21:16.113 10:10:05 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:16.113 [2024-06-10 10:10:05.424194] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:21:16.113 [2024-06-10 10:10:05.424370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81091 ]
00:21:16.113 [2024-06-10 10:10:05.597245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:16.372 [2024-06-10 10:10:05.792915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:21:16.630 [2024-06-10 10:10:06.111355] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:16.630 [2024-06-10 10:10:06.111436] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:16.891 [2024-06-10 10:10:06.266523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.266632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:16.891 [2024-06-10 10:10:06.266709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:21:16.891 [2024-06-10 10:10:06.266722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.270080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.270191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:16.891 [2024-06-10 10:10:06.270227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms
00:21:16.891 [2024-06-10 10:10:06.270238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.270542] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:16.891 [2024-06-10 10:10:06.271683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:16.891 [2024-06-10 10:10:06.271744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.271769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:16.891 [2024-06-10 10:10:06.271797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms
00:21:16.891 [2024-06-10 10:10:06.271809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.273360] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:16.891 [2024-06-10 10:10:06.291269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.291367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:21:16.891 [2024-06-10 10:10:06.291390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.903 ms
00:21:16.891 [2024-06-10 10:10:06.291414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.291707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.291732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:21:16.891 [2024-06-10 10:10:06.291746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:21:16.891 [2024-06-10 10:10:06.291773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.297490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.297562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:16.891 [2024-06-10 10:10:06.297606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.634 ms
00:21:16.891 [2024-06-10 10:10:06.297618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.297842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.297867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:16.891 [2024-06-10 10:10:06.297881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:21:16.891 [2024-06-10 10:10:06.297892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.297958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.297992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:21:16.891 [2024-06-10 10:10:06.298005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:21:16.891 [2024-06-10 10:10:06.298021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.298061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:21:16.891 [2024-06-10 10:10:06.302690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.302765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:16.891 [2024-06-10 10:10:06.302799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.640 ms
00:21:16.891 [2024-06-10 10:10:06.302810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.891 [2024-06-10 10:10:06.302932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.891 [2024-06-10 10:10:06.302952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:21:16.892 [2024-06-10 10:10:06.302965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:21:16.892 [2024-06-10 10:10:06.302975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.892 [2024-06-10 10:10:06.303006] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:21:16.892 [2024-06-10 10:10:06.303035] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:21:16.892 [2024-06-10 10:10:06.303098] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:21:16.892 [2024-06-10 10:10:06.303119] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes
00:21:16.892 [2024-06-10 10:10:06.303240] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:21:16.892 [2024-06-10 10:10:06.303257] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:21:16.892 [2024-06-10 10:10:06.303271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes
00:21:16.892 [2024-06-10 10:10:06.303286] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303301] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303313] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:21:16.892 [2024-06-10 10:10:06.303324] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:21:16.892 [2024-06-10 10:10:06.303339] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:21:16.892 [2024-06-10 10:10:06.303350] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:21:16.892 [2024-06-10 10:10:06.303362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.892 [2024-06-10 10:10:06.303374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:21:16.892 [2024-06-10 10:10:06.303386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms
00:21:16.892 [2024-06-10 10:10:06.303397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.892 [2024-06-10 10:10:06.303494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.892 [2024-06-10 10:10:06.303510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:21:16.892 [2024-06-10 10:10:06.303522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:21:16.892 [2024-06-10 10:10:06.303533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.892 [2024-06-10 10:10:06.303660] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:21:16.892 [2024-06-10 10:10:06.303680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:21:16.892 [2024-06-10 10:10:06.303693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:21:16.892 [2024-06-10 10:10:06.303727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:21:16.892 [2024-06-10 10:10:06.303758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:21:16.892 [2024-06-10 10:10:06.303779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:21:16.892 [2024-06-10 10:10:06.303789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:21:16.892 [2024-06-10 10:10:06.303799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:21:16.892 [2024-06-10 10:10:06.303809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:21:16.892 [2024-06-10 10:10:06.303821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:21:16.892 [2024-06-10 10:10:06.303831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:21:16.892 [2024-06-10 10:10:06.303851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:21:16.892 [2024-06-10 10:10:06.303898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:21:16.892 [2024-06-10 10:10:06.303929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:21:16.892 [2024-06-10 10:10:06.303958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:21:16.892 [2024-06-10 10:10:06.303977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:16.892 [2024-06-10 10:10:06.303987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:21:16.892 [2024-06-10 10:10:06.303997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:21:16.892 [2024-06-10 10:10:06.304007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:16.892 [2024-06-10 10:10:06.304017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:21:16.892 [2024-06-10 10:10:06.304027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:21:16.892 [2024-06-10 10:10:06.304037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:21:16.892 [2024-06-10 10:10:06.304047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:21:16.892 [2024-06-10 10:10:06.304057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:21:16.892 [2024-06-10 10:10:06.304067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:21:16.892 [2024-06-10 10:10:06.304077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:21:16.892 [2024-06-10 10:10:06.304088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:21:16.892 [2024-06-10 10:10:06.304097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:16.892 [2024-06-10 10:10:06.304107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:21:16.892 [2024-06-10 10:10:06.304117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:21:16.892 [2024-06-10 10:10:06.304127] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:16.892 [2024-06-10 10:10:06.304138] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:21:16.892 [2024-06-10 10:10:06.304149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:21:16.892 [2024-06-10 10:10:06.304160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:21:16.892 [2024-06-10 10:10:06.304174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:16.892 [2024-06-10 10:10:06.304185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:21:16.892 [2024-06-10 10:10:06.304196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:21:16.892 [2024-06-10 10:10:06.304206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:21:16.892 [2024-06-10 10:10:06.304216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:21:16.892 [2024-06-10 10:10:06.304226] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:21:16.892 [2024-06-10 10:10:06.304236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:21:16.892 [2024-06-10 10:10:06.304248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:21:16.892 [2024-06-10 10:10:06.304262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:16.892 [2024-06-10 10:10:06.304280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:21:16.892 [2024-06-10 10:10:06.304292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:21:16.892 [2024-06-10 10:10:06.304303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:21:16.892 [2024-06-10 10:10:06.304314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:21:16.892 [2024-06-10 10:10:06.304325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:21:16.892 [2024-06-10 10:10:06.304336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:21:16.892 [2024-06-10 10:10:06.304347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:21:16.892 [2024-06-10 10:10:06.304358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:21:16.892 [2024-06-10 10:10:06.304369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:21:16.892 [2024-06-10 10:10:06.304380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:21:16.892 [2024-06-10 10:10:06.304391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:21:16.892 [2024-06-10 10:10:06.304402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:21:16.892 [2024-06-10 10:10:06.304413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:21:16.892 [2024-06-10 10:10:06.304425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:21:16.892 [2024-06-10 10:10:06.304436] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:21:16.892 [2024-06-10 10:10:06.304449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:16.892 [2024-06-10 10:10:06.304461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:21:16.892 [2024-06-10 10:10:06.304472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:21:16.893 [2024-06-10 10:10:06.304484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:21:16.893 [2024-06-10 10:10:06.304495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:21:16.893 [2024-06-10 10:10:06.304507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.893 [2024-06-10 10:10:06.304518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:21:16.893 [2024-06-10 10:10:06.304530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms
00:21:16.893 [2024-06-10 10:10:06.304542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.893 [2024-06-10 10:10:06.350459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.893 [2024-06-10 10:10:06.350530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:16.893 [2024-06-10 10:10:06.350568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.840 ms
00:21:16.893 [2024-06-10 10:10:06.350578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.893 [2024-06-10 10:10:06.350830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.893 [2024-06-10 10:10:06.350853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:21:16.893 [2024-06-10 10:10:06.350874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms
00:21:16.893 [2024-06-10 10:10:06.350888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.893 [2024-06-10 10:10:06.388712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.893 [2024-06-10 10:10:06.388832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:16.893 [2024-06-10 10:10:06.388868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.790 ms
00:21:16.893 [2024-06-10 10:10:06.388879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.893 [2024-06-10 10:10:06.389060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.893 [2024-06-10 10:10:06.389085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:16.893 [2024-06-10 10:10:06.389099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:21:16.893 [2024-06-10 10:10:06.389110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.893 [2024-06-10 10:10:06.389484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.893 [2024-06-10 10:10:06.389510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:16.893 [2024-06-10 10:10:06.389524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms
00:21:16.893 [2024-06-10 10:10:06.389535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:16.893 [2024-06-10 10:10:06.389706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:16.893 [2024-06-10 10:10:06.389726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:16.893 [2024-06-10 10:10:06.389787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms
00:21:16.893 [2024-06-10 10:10:06.389801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.152 [2024-06-10 10:10:06.407981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.152 [2024-06-10 10:10:06.408039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:17.152 [2024-06-10 10:10:06.408071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.150 ms
00:21:17.152 [2024-06-10 10:10:06.408081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.152 [2024-06-10 10:10:06.425615] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:21:17.152 [2024-06-10 10:10:06.425672] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:21:17.152 [2024-06-10 10:10:06.425693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.152 [2024-06-10 10:10:06.425705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:21:17.152 [2024-06-10 10:10:06.425719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.465 ms
00:21:17.152 [2024-06-10 10:10:06.425730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.152 [2024-06-10 10:10:06.456045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.152 [2024-06-10 10:10:06.456086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:21:17.153 [2024-06-10 10:10:06.456119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.205 ms
00:21:17.153 [2024-06-10 10:10:06.456130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.470771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.470825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:21:17.153 [2024-06-10 10:10:06.470870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.554 ms
00:21:17.153 [2024-06-10 10:10:06.470879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.485253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.485301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:21:17.153 [2024-06-10 10:10:06.485332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.294 ms
00:21:17.153 [2024-06-10 10:10:06.485341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.486093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.486143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:21:17.153 [2024-06-10 10:10:06.486178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms
00:21:17.153 [2024-06-10 10:10:06.486189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.548854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.548922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:21:17.153 [2024-06-10 10:10:06.548962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.633 ms
00:21:17.153 [2024-06-10 10:10:06.548973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.559787] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:17.153 [2024-06-10 10:10:06.572524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.572590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:21:17.153 [2024-06-10 10:10:06.572625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.409 ms
00:21:17.153 [2024-06-10 10:10:06.572635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.572822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.572845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:21:17.153 [2024-06-10 10:10:06.572862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:21:17.153 [2024-06-10 10:10:06.572873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.572939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.572955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:21:17.153 [2024-06-10 10:10:06.572966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:21:17.153 [2024-06-10 10:10:06.572976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.573007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.573020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:21:17.153 [2024-06-10 10:10:06.573031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:21:17.153 [2024-06-10 10:10:06.573041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.573097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:17.153 [2024-06-10 10:10:06.573127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.573154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:21:17.153 [2024-06-10 10:10:06.573165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:21:17.153 [2024-06-10 10:10:06.573175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.602351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.602460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:17.153 [2024-06-10 10:10:06.602515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.140 ms
00:21:17.153 [2024-06-10 10:10:06.602527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.602767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:17.153 [2024-06-10 10:10:06.602790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:17.153 [2024-06-10 10:10:06.602804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:21:17.153 [2024-06-10 10:10:06.602816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:17.153 [2024-06-10 10:10:06.603926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:17.153 [2024-06-10 10:10:06.608516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.055 ms, result 0
00:21:17.153 [2024-06-10 10:10:06.609553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:17.153 [2024-06-10 10:10:06.626357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:27.960  Copying: 25/256 [MB] (25 MBps) Copying: 49/256 [MB] (23 MBps) Copying: 73/256 [MB] (23 MBps) Copying: 98/256 [MB] (24 MBps) Copying: 121/256 [MB] (23 MBps) Copying: 146/256 [MB] (24 MBps) Copying: 171/256 [MB] (25 MBps) Copying: 197/256 [MB] (25 MBps) Copying: 222/256 [MB] (25 MBps) Copying: 248/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-06-10 10:10:17.308801] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:27.960 [2024-06-10 10:10:17.325250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.325342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:27.960 [2024-06-10 10:10:17.325367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:21:27.960 [2024-06-10 10:10:17.325382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.325422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:27.960 [2024-06-10 10:10:17.329444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.329485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:27.960 [2024-06-10 10:10:17.329516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms
00:21:27.960 [2024-06-10 10:10:17.329530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.330624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.330677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:27.960 [2024-06-10 10:10:17.330696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms
00:21:27.960 [2024-06-10 10:10:17.330709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.335624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.335668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:27.960 [2024-06-10 10:10:17.335685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms
00:21:27.960 [2024-06-10 10:10:17.335699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.344955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.344995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:27.960 [2024-06-10 10:10:17.345014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.188 ms
00:21:27.960 [2024-06-10 10:10:17.345026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.383889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.383986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:27.960 [2024-06-10 10:10:17.384010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.793 ms
00:21:27.960 [2024-06-10 10:10:17.384024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.404954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.405025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:27.960 [2024-06-10 10:10:17.405049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.802 ms
00:21:27.960 [2024-06-10 10:10:17.405063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.405303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.405332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:27.960 [2024-06-10 10:10:17.405348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms
00:21:27.960 [2024-06-10 10:10:17.405362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:27.960 [2024-06-10 10:10:17.443988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.960 [2024-06-10 10:10:17.444060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:21:27.961 [2024-06-10 10:10:17.444082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.596 ms
00:21:27.961 [2024-06-10 10:10:17.444096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:28.220 [2024-06-10 10:10:17.482479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:28.220 [2024-06-10 10:10:17.482573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:21:28.220 [2024-06-10 10:10:17.482597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.289 ms
00:21:28.220 [2024-06-10 10:10:17.482610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:28.220 [2024-06-10 10:10:17.520745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:28.220 [2024-06-10 10:10:17.520817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:28.220 [2024-06-10 10:10:17.520842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.008 ms
00:21:28.220 [2024-06-10 10:10:17.520855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:28.220 [2024-06-10 10:10:17.558976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:28.220 [2024-06-10 10:10:17.559072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:21:28.220 [2024-06-10 10:10:17.559096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.972 ms
00:21:28.220 [2024-06-10 10:10:17.559111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:28.220 [2024-06-10 10:10:17.559266] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:28.220 [2024-06-10 10:10:17.559297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:21:28.220 [2024-06-10 10:10:17.559314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[2024-06-10 10:10:17.559329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:21:28.220 [2024-06-10 10:10:17.559713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:28.220 [2024-06-10 10:10:17.559994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:28.221 [2024-06-10 10:10:17.560793] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:28.221 [2024-06-10 10:10:17.560816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
4e93524f-9e0d-42fe-9154-f58916c65969 00:21:28.221 [2024-06-10 10:10:17.560830] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:28.221 [2024-06-10 10:10:17.560843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:28.221 [2024-06-10 10:10:17.560856] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:28.221 [2024-06-10 10:10:17.560869] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:28.221 [2024-06-10 10:10:17.560898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:28.221 [2024-06-10 10:10:17.560912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:28.221 [2024-06-10 10:10:17.560925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:28.221 [2024-06-10 10:10:17.560937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:28.221 [2024-06-10 10:10:17.560949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:28.221 [2024-06-10 10:10:17.560963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.221 [2024-06-10 10:10:17.560976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:28.221 [2024-06-10 10:10:17.560991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:21:28.221 [2024-06-10 10:10:17.561004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.221 [2024-06-10 10:10:17.581179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.221 [2024-06-10 10:10:17.581249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:28.221 [2024-06-10 10:10:17.581271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.132 ms 00:21:28.221 [2024-06-10 10:10:17.581284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.221 [2024-06-10 10:10:17.581888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.221 [2024-06-10 10:10:17.581916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:28.221 [2024-06-10 10:10:17.581932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:21:28.221 [2024-06-10 10:10:17.581954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.221 [2024-06-10 10:10:17.630121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.221 [2024-06-10 10:10:17.630196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:28.221 [2024-06-10 10:10:17.630217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.221 [2024-06-10 10:10:17.630231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.221 [2024-06-10 10:10:17.630370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.221 [2024-06-10 10:10:17.630389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:28.221 [2024-06-10 10:10:17.630404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.221 [2024-06-10 10:10:17.630426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.221 [2024-06-10 10:10:17.630513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.221 [2024-06-10 10:10:17.630536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:28.221 
[2024-06-10 10:10:17.630550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.221 [2024-06-10 10:10:17.630563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.221 [2024-06-10 10:10:17.630593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.221 [2024-06-10 10:10:17.630609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:28.221 [2024-06-10 10:10:17.630622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.221 [2024-06-10 10:10:17.630635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.738686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.738774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:28.480 [2024-06-10 10:10:17.738793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.738804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.824483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.824559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:28.480 [2024-06-10 10:10:17.824580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.824605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.824718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.824739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:28.480 [2024-06-10 10:10:17.824751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.824763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.824798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.824811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:28.480 [2024-06-10 10:10:17.824822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.824833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.824960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.824978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:28.480 [2024-06-10 10:10:17.824991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.825002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.825058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.825076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:28.480 [2024-06-10 10:10:17.825088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.825099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.825151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.825167] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:28.480 [2024-06-10 10:10:17.825179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.825190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.825243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.480 [2024-06-10 10:10:17.825259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:28.480 [2024-06-10 10:10:17.825271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.480 [2024-06-10 10:10:17.825281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.480 [2024-06-10 10:10:17.825449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.225 ms, result 0 00:21:29.415 00:21:29.415 00:21:29.415 10:10:18 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:30.001 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:30.001 10:10:19 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:30.001 10:10:19 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:30.001 10:10:19 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:30.001 10:10:19 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:30.001 10:10:19 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:30.001 10:10:19 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:30.327 10:10:19 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81038 00:21:30.327 10:10:19 ftl.ftl_trim -- common/autotest_common.sh@949 -- # '[' -z 81038 ']' 00:21:30.327 10:10:19 ftl.ftl_trim -- common/autotest_common.sh@953 -- # kill -0 81038 00:21:30.327 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (81038) - No such process 00:21:30.327 Process with pid 81038 is not found 00:21:30.327 10:10:19 ftl.ftl_trim -- common/autotest_common.sh@976 -- # echo 'Process with pid 81038 is not found' 00:21:30.327 ************************************ 00:21:30.327 END TEST ftl_trim 00:21:30.327 ************************************ 00:21:30.327 00:21:30.327 real 1m9.482s 00:21:30.327 user 1m34.904s 00:21:30.327 sys 0m6.417s 00:21:30.327 10:10:19 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:30.327 10:10:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:30.327 10:10:19 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:30.327 10:10:19 ftl -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:21:30.327 10:10:19 ftl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:30.327 10:10:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:30.327 ************************************ 00:21:30.327 START TEST ftl_restore 00:21:30.327 ************************************ 00:21:30.327 10:10:19 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:30.327 * Looking for test storage... 
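[editor's note] The span above closes the ftl_trim run: the device finishes the 'FTL shutdown' management process with result 0, and the data file read back from the FTL bdev is verified against the checksum recorded earlier in the test (`md5sum -c .../testfile.md5` reports `.../data: OK`) before the temporary files are removed and `ftl_restore` begins. Below is a minimal bash sketch of that record-then-verify pattern, not the actual trim.sh code; the `$testdir` variable, file sizes, and paths are placeholders chosen for illustration.

```bash
#!/usr/bin/env bash
# Sketch of the checksum round-trip used to validate data integrity:
# record a reference md5 of a data file before the device is torn down,
# then re-verify the same file after it has been read back from the bdev.
testdir=/tmp/ftl_check                     # placeholder, not the real test path
mkdir -p "$testdir"

dd if=/dev/urandom of="$testdir/data" bs=1M count=8 status=none   # sample payload
md5sum "$testdir/data" > "$testdir/testfile.md5"                  # reference checksum

# ... write the file to the FTL bdev, shut the device down, read it back ...

md5sum -c "$testdir/testfile.md5"          # prints "<path>/data: OK" when contents match
rm -f "$testdir/testfile.md5" "$testdir/data"                     # cleanup, as in the log
```

`md5sum -c` exits non-zero on any mismatch, which is what allows the test's EXIT/SIGTERM trap to fail the run if the data read back after shutdown differs from what was written.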
00:21:30.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.liHH6w07QI 00:21:30.327 10:10:19 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=81295 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.327 10:10:19 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 81295 00:21:30.327 10:10:19 ftl.ftl_restore -- common/autotest_common.sh@830 -- # '[' -z 81295 ']' 00:21:30.327 10:10:19 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.327 10:10:19 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:30.327 10:10:19 ftl.ftl_restore -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.327 10:10:19 ftl.ftl_restore -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:30.327 10:10:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:30.327 [2024-06-10 10:10:19.785555] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:30.327 [2024-06-10 10:10:19.785971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81295 ] 00:21:30.586 [2024-06-10 10:10:19.948983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.844 [2024-06-10 10:10:20.181499] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.410 10:10:20 ftl.ftl_restore -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:31.410 10:10:20 ftl.ftl_restore -- common/autotest_common.sh@863 -- # return 0 00:21:31.410 10:10:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:31.410 10:10:20 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:31.410 10:10:20 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:31.410 10:10:20 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:31.410 10:10:20 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:31.410 10:10:20 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:31.977 10:10:21 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:31.977 10:10:21 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:31.977 10:10:21 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:31.977 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_name=nvme0n1 00:21:31.977 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_info 00:21:31.977 10:10:21 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # local bs 00:21:31.977 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local nb 00:21:31.977 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:31.977 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:21:31.977 { 00:21:31.977 "name": "nvme0n1", 00:21:31.977 "aliases": [ 00:21:31.977 "c8b503be-5f6d-4608-88ef-1ff4cebf7b9f" 00:21:31.978 ], 00:21:31.978 "product_name": "NVMe disk", 00:21:31.978 "block_size": 4096, 00:21:31.978 "num_blocks": 1310720, 00:21:31.978 "uuid": "c8b503be-5f6d-4608-88ef-1ff4cebf7b9f", 00:21:31.978 "assigned_rate_limits": { 00:21:31.978 "rw_ios_per_sec": 0, 00:21:31.978 "rw_mbytes_per_sec": 0, 00:21:31.978 "r_mbytes_per_sec": 0, 00:21:31.978 "w_mbytes_per_sec": 0 00:21:31.978 }, 00:21:31.978 "claimed": true, 00:21:31.978 "claim_type": "read_many_write_one", 00:21:31.978 "zoned": false, 00:21:31.978 "supported_io_types": { 00:21:31.978 "read": true, 00:21:31.978 "write": true, 00:21:31.978 "unmap": true, 00:21:31.978 "write_zeroes": true, 00:21:31.978 "flush": true, 00:21:31.978 "reset": true, 00:21:31.978 "compare": true, 00:21:31.978 "compare_and_write": false, 00:21:31.978 "abort": true, 00:21:31.978 "nvme_admin": true, 00:21:31.978 "nvme_io": true 00:21:31.978 }, 00:21:31.978 "driver_specific": { 00:21:31.978 "nvme": [ 00:21:31.978 { 00:21:31.978 "pci_address": "0000:00:11.0", 00:21:31.978 "trid": { 00:21:31.978 "trtype": "PCIe", 00:21:31.978 "traddr": "0000:00:11.0" 00:21:31.978 }, 00:21:31.978 "ctrlr_data": { 00:21:31.978 "cntlid": 0, 00:21:31.978 "vendor_id": "0x1b36", 00:21:31.978 "model_number": "QEMU NVMe Ctrl", 00:21:31.978 "serial_number": "12341", 00:21:31.978 "firmware_revision": "8.0.0", 00:21:31.978 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:31.978 "oacs": { 00:21:31.978 "security": 0, 00:21:31.978 "format": 1, 00:21:31.978 "firmware": 0, 00:21:31.978 "ns_manage": 1 00:21:31.978 }, 00:21:31.978 "multi_ctrlr": false, 00:21:31.978 "ana_reporting": false 00:21:31.978 }, 00:21:31.978 "vs": { 00:21:31.978 "nvme_version": "1.4" 00:21:31.978 }, 00:21:31.978 "ns_data": { 00:21:31.978 "id": 1, 00:21:31.978 "can_share": false 00:21:31.978 } 00:21:31.978 } 00:21:31.978 ], 00:21:31.978 "mp_policy": "active_passive" 00:21:31.978 } 00:21:31.978 } 00:21:31.978 ]' 00:21:32.235 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:21:32.235 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bs=4096 00:21:32.235 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:21:32.235 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # nb=1310720 00:21:32.235 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_size=5120 00:21:32.235 10:10:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # echo 5120 00:21:32.235 10:10:21 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:32.235 10:10:21 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:32.235 10:10:21 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:32.235 10:10:21 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:32.235 10:10:21 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:32.493 10:10:21 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=fbc15256-e89f-4972-b87d-64f005cc9399 00:21:32.493 10:10:21 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:21:32.493 10:10:21 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fbc15256-e89f-4972-b87d-64f005cc9399 00:21:32.751 10:10:22 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:33.010 10:10:22 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=99e489d3-349b-4fbc-9b0a-e3a2f8403a3c 00:21:33.010 10:10:22 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 99e489d3-349b-4fbc-9b0a-e3a2f8403a3c 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=958bd930-5d0c-418b-a599-4641138b973d 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 958bd930-5d0c-418b-a599-4641138b973d 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=958bd930-5d0c-418b-a599-4641138b973d 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:33.269 10:10:22 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 958bd930-5d0c-418b-a599-4641138b973d 00:21:33.269 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_name=958bd930-5d0c-418b-a599-4641138b973d 00:21:33.269 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_info 00:21:33.269 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bs 00:21:33.269 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local nb 00:21:33.269 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 958bd930-5d0c-418b-a599-4641138b973d 00:21:33.527 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:21:33.527 { 00:21:33.527 "name": "958bd930-5d0c-418b-a599-4641138b973d", 00:21:33.527 "aliases": [ 00:21:33.527 "lvs/nvme0n1p0" 00:21:33.527 ], 00:21:33.527 "product_name": "Logical Volume", 00:21:33.527 "block_size": 4096, 00:21:33.527 "num_blocks": 26476544, 00:21:33.527 "uuid": "958bd930-5d0c-418b-a599-4641138b973d", 00:21:33.527 "assigned_rate_limits": { 00:21:33.527 "rw_ios_per_sec": 0, 00:21:33.527 "rw_mbytes_per_sec": 0, 00:21:33.527 "r_mbytes_per_sec": 0, 00:21:33.527 "w_mbytes_per_sec": 0 00:21:33.527 }, 00:21:33.527 "claimed": false, 00:21:33.527 "zoned": false, 00:21:33.527 "supported_io_types": { 00:21:33.527 "read": true, 00:21:33.527 "write": true, 00:21:33.527 "unmap": true, 00:21:33.527 "write_zeroes": true, 00:21:33.527 "flush": false, 00:21:33.527 "reset": true, 00:21:33.527 "compare": false, 00:21:33.527 "compare_and_write": false, 00:21:33.527 "abort": false, 00:21:33.527 "nvme_admin": false, 00:21:33.527 "nvme_io": false 00:21:33.527 }, 00:21:33.527 "driver_specific": { 00:21:33.527 "lvol": { 00:21:33.527 "lvol_store_uuid": "99e489d3-349b-4fbc-9b0a-e3a2f8403a3c", 00:21:33.527 "base_bdev": "nvme0n1", 00:21:33.527 "thin_provision": true, 00:21:33.527 "num_allocated_clusters": 0, 00:21:33.527 "snapshot": false, 00:21:33.527 "clone": false, 00:21:33.527 "esnap_clone": false 00:21:33.527 } 00:21:33.527 } 00:21:33.527 } 00:21:33.527 ]' 00:21:33.527 10:10:22 ftl.ftl_restore -- 
common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:21:33.527 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bs=4096 00:21:33.527 10:10:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:21:33.527 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # nb=26476544 00:21:33.527 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:21:33.527 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # echo 103424 00:21:33.527 10:10:23 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:33.527 10:10:23 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:33.527 10:10:23 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:34.092 10:10:23 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:34.092 10:10:23 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:34.092 10:10:23 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 958bd930-5d0c-418b-a599-4641138b973d 00:21:34.092 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_name=958bd930-5d0c-418b-a599-4641138b973d 00:21:34.092 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_info 00:21:34.092 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bs 00:21:34.092 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local nb 00:21:34.092 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 958bd930-5d0c-418b-a599-4641138b973d 00:21:34.092 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:21:34.092 { 00:21:34.092 "name": "958bd930-5d0c-418b-a599-4641138b973d", 00:21:34.092 "aliases": [ 00:21:34.092 "lvs/nvme0n1p0" 00:21:34.092 ], 00:21:34.092 "product_name": "Logical Volume", 00:21:34.092 "block_size": 4096, 00:21:34.092 "num_blocks": 26476544, 00:21:34.092 "uuid": "958bd930-5d0c-418b-a599-4641138b973d", 00:21:34.092 "assigned_rate_limits": { 00:21:34.092 "rw_ios_per_sec": 0, 00:21:34.092 "rw_mbytes_per_sec": 0, 00:21:34.092 "r_mbytes_per_sec": 0, 00:21:34.092 "w_mbytes_per_sec": 0 00:21:34.092 }, 00:21:34.092 "claimed": false, 00:21:34.092 "zoned": false, 00:21:34.092 "supported_io_types": { 00:21:34.092 "read": true, 00:21:34.092 "write": true, 00:21:34.092 "unmap": true, 00:21:34.092 "write_zeroes": true, 00:21:34.092 "flush": false, 00:21:34.092 "reset": true, 00:21:34.092 "compare": false, 00:21:34.092 "compare_and_write": false, 00:21:34.092 "abort": false, 00:21:34.092 "nvme_admin": false, 00:21:34.092 "nvme_io": false 00:21:34.092 }, 00:21:34.092 "driver_specific": { 00:21:34.092 "lvol": { 00:21:34.092 "lvol_store_uuid": "99e489d3-349b-4fbc-9b0a-e3a2f8403a3c", 00:21:34.092 "base_bdev": "nvme0n1", 00:21:34.092 "thin_provision": true, 00:21:34.092 "num_allocated_clusters": 0, 00:21:34.092 "snapshot": false, 00:21:34.092 "clone": false, 00:21:34.092 "esnap_clone": false 00:21:34.092 } 00:21:34.092 } 00:21:34.092 } 00:21:34.092 ]' 00:21:34.092 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:21:34.350 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bs=4096 00:21:34.350 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:21:34.350 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # nb=26476544 00:21:34.350 10:10:23 ftl.ftl_restore 
-- common/autotest_common.sh@1386 -- # bdev_size=103424 00:21:34.350 10:10:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # echo 103424 00:21:34.350 10:10:23 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:34.350 10:10:23 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:34.609 10:10:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:34.609 10:10:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 958bd930-5d0c-418b-a599-4641138b973d 00:21:34.609 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local bdev_name=958bd930-5d0c-418b-a599-4641138b973d 00:21:34.609 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_info 00:21:34.609 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bs 00:21:34.609 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local nb 00:21:34.609 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 958bd930-5d0c-418b-a599-4641138b973d 00:21:34.867 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:21:34.867 { 00:21:34.867 "name": "958bd930-5d0c-418b-a599-4641138b973d", 00:21:34.867 "aliases": [ 00:21:34.867 "lvs/nvme0n1p0" 00:21:34.867 ], 00:21:34.867 "product_name": "Logical Volume", 00:21:34.867 "block_size": 4096, 00:21:34.867 "num_blocks": 26476544, 00:21:34.867 "uuid": "958bd930-5d0c-418b-a599-4641138b973d", 00:21:34.867 "assigned_rate_limits": { 00:21:34.867 "rw_ios_per_sec": 0, 00:21:34.867 "rw_mbytes_per_sec": 0, 00:21:34.867 "r_mbytes_per_sec": 0, 00:21:34.867 "w_mbytes_per_sec": 0 00:21:34.867 }, 00:21:34.867 "claimed": false, 00:21:34.867 "zoned": false, 00:21:34.867 "supported_io_types": { 00:21:34.867 "read": true, 00:21:34.867 "write": true, 00:21:34.867 "unmap": true, 00:21:34.867 "write_zeroes": true, 00:21:34.867 "flush": false, 00:21:34.867 "reset": true, 00:21:34.867 "compare": false, 00:21:34.867 "compare_and_write": false, 00:21:34.867 "abort": false, 00:21:34.867 "nvme_admin": false, 00:21:34.867 "nvme_io": false 00:21:34.867 }, 00:21:34.867 "driver_specific": { 00:21:34.867 "lvol": { 00:21:34.867 "lvol_store_uuid": "99e489d3-349b-4fbc-9b0a-e3a2f8403a3c", 00:21:34.867 "base_bdev": "nvme0n1", 00:21:34.867 "thin_provision": true, 00:21:34.867 "num_allocated_clusters": 0, 00:21:34.867 "snapshot": false, 00:21:34.867 "clone": false, 00:21:34.867 "esnap_clone": false 00:21:34.867 } 00:21:34.867 } 00:21:34.867 } 00:21:34.867 ]' 00:21:34.867 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:21:34.867 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bs=4096 00:21:34.867 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:21:35.126 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # nb=26476544 00:21:35.126 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:21:35.126 10:10:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # echo 103424 00:21:35.126 10:10:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:35.126 10:10:24 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 958bd930-5d0c-418b-a599-4641138b973d --l2p_dram_limit 10' 00:21:35.126 10:10:24 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:35.126 10:10:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:21:35.126 10:10:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:35.126 10:10:24 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:35.126 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:35.126 10:10:24 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 958bd930-5d0c-418b-a599-4641138b973d --l2p_dram_limit 10 -c nvc0n1p0 00:21:35.386 [2024-06-10 10:10:24.645476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.386 [2024-06-10 10:10:24.645545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.386 [2024-06-10 10:10:24.645571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.386 [2024-06-10 10:10:24.645585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.386 [2024-06-10 10:10:24.645691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.386 [2024-06-10 10:10:24.645712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.386 [2024-06-10 10:10:24.645728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:35.386 [2024-06-10 10:10:24.645740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.386 [2024-06-10 10:10:24.645775] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.386 [2024-06-10 10:10:24.646747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.386 [2024-06-10 10:10:24.646784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.386 [2024-06-10 10:10:24.646798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.386 [2024-06-10 10:10:24.646817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:21:35.386 [2024-06-10 10:10:24.646839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.386 [2024-06-10 10:10:24.646978] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4ef5368c-bce2-41c6-87e9-246a186c5c8a 00:21:35.386 [2024-06-10 10:10:24.648067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.386 [2024-06-10 10:10:24.648110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:35.386 [2024-06-10 10:10:24.648143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:35.386 [2024-06-10 10:10:24.648157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.386 [2024-06-10 10:10:24.652898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.386 [2024-06-10 10:10:24.652960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.386 [2024-06-10 10:10:24.652982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:21:35.386 [2024-06-10 10:10:24.652997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.386 [2024-06-10 10:10:24.653126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.386 [2024-06-10 10:10:24.653150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.386 [2024-06-10 10:10:24.653164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.088 ms 00:21:35.386 [2024-06-10 10:10:24.653178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.386 [2024-06-10 10:10:24.653252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.386 [2024-06-10 10:10:24.653275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.386 [2024-06-10 10:10:24.653289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:35.387 [2024-06-10 10:10:24.653303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.387 [2024-06-10 10:10:24.653337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.387 [2024-06-10 10:10:24.658120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.387 [2024-06-10 10:10:24.658176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.387 [2024-06-10 10:10:24.658220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:21:35.387 [2024-06-10 10:10:24.658233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.387 [2024-06-10 10:10:24.658294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.387 [2024-06-10 10:10:24.658325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.387 [2024-06-10 10:10:24.658339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:35.387 [2024-06-10 10:10:24.658350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.387 [2024-06-10 10:10:24.658432] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:35.387 [2024-06-10 10:10:24.658596] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.387 [2024-06-10 10:10:24.658618] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.387 [2024-06-10 10:10:24.658634] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:35.387 [2024-06-10 10:10:24.658654] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.387 [2024-06-10 10:10:24.658668] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.387 [2024-06-10 10:10:24.658684] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.387 [2024-06-10 10:10:24.658912] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.387 [2024-06-10 10:10:24.658988] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.387 [2024-06-10 10:10:24.659112] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.387 [2024-06-10 10:10:24.659188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.387 [2024-06-10 10:10:24.659233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.387 [2024-06-10 10:10:24.659360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:21:35.387 [2024-06-10 10:10:24.659483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.387 [2024-06-10 10:10:24.659609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:35.387 [2024-06-10 10:10:24.659627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.387 [2024-06-10 10:10:24.659657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:35.387 [2024-06-10 10:10:24.659672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.387 [2024-06-10 10:10:24.659818] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.387 [2024-06-10 10:10:24.659835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.387 [2024-06-10 10:10:24.659851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.387 [2024-06-10 10:10:24.659863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.387 [2024-06-10 10:10:24.659906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.387 [2024-06-10 10:10:24.659933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.387 [2024-06-10 10:10:24.659962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.387 [2024-06-10 10:10:24.659973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.387 [2024-06-10 10:10:24.659986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.387 [2024-06-10 10:10:24.659997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.387 [2024-06-10 10:10:24.660012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.387 [2024-06-10 10:10:24.660023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.387 [2024-06-10 10:10:24.660036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.387 [2024-06-10 10:10:24.660047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.387 [2024-06-10 10:10:24.660060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:35.387 [2024-06-10 10:10:24.660070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.387 [2024-06-10 10:10:24.660083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.387 [2024-06-10 10:10:24.660094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:35.387 [2024-06-10 10:10:24.660109] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.387 [2024-06-10 10:10:24.660121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.387 [2024-06-10 10:10:24.660134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.387 [2024-06-10 10:10:24.660145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.387 [2024-06-10 10:10:24.660157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.387 [2024-06-10 10:10:24.660168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.387 [2024-06-10 10:10:24.660181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.387 [2024-06-10 10:10:24.660192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.387 [2024-06-10 10:10:24.660204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.387 [2024-06-10 10:10:24.660215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.387 [2024-06-10 10:10:24.660227] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:21:35.387 [2024-06-10 10:10:24.660238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:35.387 [2024-06-10 10:10:24.660267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.387 [2024-06-10 10:10:24.660277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.387 [2024-06-10 10:10:24.660289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:35.387 [2024-06-10 10:10:24.660300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.387 [2024-06-10 10:10:24.660346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.387 [2024-06-10 10:10:24.660358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:35.388 [2024-06-10 10:10:24.660373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.388 [2024-06-10 10:10:24.660386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.388 [2024-06-10 10:10:24.660400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:35.388 [2024-06-10 10:10:24.660411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.388 [2024-06-10 10:10:24.660424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.388 [2024-06-10 10:10:24.660436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:35.388 [2024-06-10 10:10:24.660449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.388 [2024-06-10 10:10:24.660460] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.388 [2024-06-10 10:10:24.660474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.388 [2024-06-10 10:10:24.660485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.388 [2024-06-10 10:10:24.660499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.388 [2024-06-10 10:10:24.660511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.388 [2024-06-10 10:10:24.660524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.388 [2024-06-10 10:10:24.660535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.388 [2024-06-10 10:10:24.660551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.388 [2024-06-10 10:10:24.660562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.388 [2024-06-10 10:10:24.660575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.388 [2024-06-10 10:10:24.660592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.388 [2024-06-10 10:10:24.660610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.388 [2024-06-10 10:10:24.660623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.388 [2024-06-10 10:10:24.660637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:35.388 [2024-06-10 10:10:24.660649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:21:35.388 [2024-06-10 10:10:24.660663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:35.388 [2024-06-10 10:10:24.660675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:35.388 [2024-06-10 10:10:24.660689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:35.388 [2024-06-10 10:10:24.660701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:35.388 [2024-06-10 10:10:24.660717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:35.388 [2024-06-10 10:10:24.660729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:35.388 [2024-06-10 10:10:24.660743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:35.388 [2024-06-10 10:10:24.660768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:35.388 [2024-06-10 10:10:24.660788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:35.388 [2024-06-10 10:10:24.660801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:35.388 [2024-06-10 10:10:24.660816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:35.388 [2024-06-10 10:10:24.660828] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.388 [2024-06-10 10:10:24.660847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.388 [2024-06-10 10:10:24.660861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.388 [2024-06-10 10:10:24.660875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.388 [2024-06-10 10:10:24.660888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.388 [2024-06-10 10:10:24.660902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.388 [2024-06-10 10:10:24.660915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.388 [2024-06-10 10:10:24.660929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.388 [2024-06-10 10:10:24.660942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:21:35.388 [2024-06-10 10:10:24.660956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.388 [2024-06-10 10:10:24.661010] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:21:35.388 [2024-06-10 10:10:24.661030] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:37.285 [2024-06-10 10:10:26.786644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.285 [2024-06-10 10:10:26.786882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:37.285 [2024-06-10 10:10:26.787012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2125.645 ms 00:21:37.285 [2024-06-10 10:10:26.787071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.819935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.820151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.543 [2024-06-10 10:10:26.820280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.417 ms 00:21:37.543 [2024-06-10 10:10:26.820338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.820687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.820838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.543 [2024-06-10 10:10:26.820959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:37.543 [2024-06-10 10:10:26.821101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.860282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.860501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.543 [2024-06-10 10:10:26.860677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.067 ms 00:21:37.543 [2024-06-10 10:10:26.860801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.860907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.861008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.543 [2024-06-10 10:10:26.861122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:37.543 [2024-06-10 10:10:26.861180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.861655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.861800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.543 [2024-06-10 10:10:26.861912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:21:37.543 [2024-06-10 10:10:26.861973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.862210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.862350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.543 [2024-06-10 10:10:26.862471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:37.543 [2024-06-10 10:10:26.862535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.879853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.880088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:21:37.543 [2024-06-10 10:10:26.880235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.186 ms 00:21:37.543 [2024-06-10 10:10:26.880292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.893804] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:37.543 [2024-06-10 10:10:26.896581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.896745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:37.543 [2024-06-10 10:10:26.896868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:21:37.543 [2024-06-10 10:10:26.896974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.969627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.969707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:37.543 [2024-06-10 10:10:26.969734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.496 ms 00:21:37.543 [2024-06-10 10:10:26.969762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:26.969997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:26.970018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:37.543 [2024-06-10 10:10:26.970038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:21:37.543 [2024-06-10 10:10:26.970050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.543 [2024-06-10 10:10:27.002575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.543 [2024-06-10 10:10:27.002664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:37.543 [2024-06-10 10:10:27.002690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.442 ms 00:21:37.543 [2024-06-10 10:10:27.002704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.544 [2024-06-10 10:10:27.033622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.544 [2024-06-10 10:10:27.033731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:37.544 [2024-06-10 10:10:27.033760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.842 ms 00:21:37.544 [2024-06-10 10:10:27.033774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.544 [2024-06-10 10:10:27.034584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.544 [2024-06-10 10:10:27.034620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.544 [2024-06-10 10:10:27.034659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:21:37.544 [2024-06-10 10:10:27.034676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.802 [2024-06-10 10:10:27.126965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.802 [2024-06-10 10:10:27.127033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:37.802 [2024-06-10 10:10:27.127073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.171 ms 00:21:37.802 [2024-06-10 10:10:27.127086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:37.802 [2024-06-10 10:10:27.160752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.802 [2024-06-10 10:10:27.160818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:37.802 [2024-06-10 10:10:27.160843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.576 ms 00:21:37.802 [2024-06-10 10:10:27.160865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.802 [2024-06-10 10:10:27.194015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.802 [2024-06-10 10:10:27.194084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:37.802 [2024-06-10 10:10:27.194110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.043 ms 00:21:37.802 [2024-06-10 10:10:27.194123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.802 [2024-06-10 10:10:27.226089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.802 [2024-06-10 10:10:27.226137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:37.802 [2024-06-10 10:10:27.226176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.892 ms 00:21:37.802 [2024-06-10 10:10:27.226189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.802 [2024-06-10 10:10:27.226255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.802 [2024-06-10 10:10:27.226276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:37.802 [2024-06-10 10:10:27.226292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:37.802 [2024-06-10 10:10:27.226308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.802 [2024-06-10 10:10:27.226428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.802 [2024-06-10 10:10:27.226448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:37.802 [2024-06-10 10:10:27.226464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:37.802 [2024-06-10 10:10:27.226478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.802 [2024-06-10 10:10:27.227516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2581.557 ms, result 0 00:21:37.802 { 00:21:37.802 "name": "ftl0", 00:21:37.802 "uuid": "4ef5368c-bce2-41c6-87e9-246a186c5c8a" 00:21:37.802 } 00:21:37.802 10:10:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:37.802 10:10:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:38.369 10:10:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:38.369 10:10:27 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:38.369 [2024-06-10 10:10:27.843279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.369 [2024-06-10 10:10:27.843352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:38.369 [2024-06-10 10:10:27.843378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:38.369 [2024-06-10 10:10:27.843394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.369 [2024-06-10 10:10:27.843430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:21:38.369 [2024-06-10 10:10:27.846718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.369 [2024-06-10 10:10:27.846754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:38.369 [2024-06-10 10:10:27.846773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:21:38.369 [2024-06-10 10:10:27.846786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.369 [2024-06-10 10:10:27.847141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.369 [2024-06-10 10:10:27.847168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:38.369 [2024-06-10 10:10:27.847195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:38.369 [2024-06-10 10:10:27.847207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.369 [2024-06-10 10:10:27.850707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.369 [2024-06-10 10:10:27.850770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:38.369 [2024-06-10 10:10:27.850807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:21:38.369 [2024-06-10 10:10:27.850831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.369 [2024-06-10 10:10:27.857796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.369 [2024-06-10 10:10:27.857841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:38.369 [2024-06-10 10:10:27.857862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.910 ms 00:21:38.369 [2024-06-10 10:10:27.857885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:27.889257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.629 [2024-06-10 10:10:27.889318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:38.629 [2024-06-10 10:10:27.889341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.253 ms 00:21:38.629 [2024-06-10 10:10:27.889354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:27.908204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.629 [2024-06-10 10:10:27.908265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:38.629 [2024-06-10 10:10:27.908292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.781 ms 00:21:38.629 [2024-06-10 10:10:27.908305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:27.908510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.629 [2024-06-10 10:10:27.908532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:38.629 [2024-06-10 10:10:27.908549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:38.629 [2024-06-10 10:10:27.908561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:27.939865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.629 [2024-06-10 10:10:27.939922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:38.629 [2024-06-10 10:10:27.939948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.275 ms 
00:21:38.629 [2024-06-10 10:10:27.939961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:27.970811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.629 [2024-06-10 10:10:27.970866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:38.629 [2024-06-10 10:10:27.970887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.792 ms 00:21:38.629 [2024-06-10 10:10:27.970900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:28.001487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.629 [2024-06-10 10:10:28.001546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:38.629 [2024-06-10 10:10:28.001572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.526 ms 00:21:38.629 [2024-06-10 10:10:28.001584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:28.032332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.629 [2024-06-10 10:10:28.032388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:38.629 [2024-06-10 10:10:28.032410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.585 ms 00:21:38.629 [2024-06-10 10:10:28.032423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.629 [2024-06-10 10:10:28.032484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:38.629 [2024-06-10 10:10:28.032510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:38.629 [2024-06-10 10:10:28.032846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.032994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033421] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033786] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:38.630 [2024-06-10 10:10:28.033961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:38.630 [2024-06-10 10:10:28.033975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef5368c-bce2-41c6-87e9-246a186c5c8a 00:21:38.630 [2024-06-10 10:10:28.033992] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:38.630 [2024-06-10 10:10:28.034005] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:38.630 [2024-06-10 10:10:28.034017] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:38.630 [2024-06-10 10:10:28.034032] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:38.630 [2024-06-10 10:10:28.034044] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:38.630 [2024-06-10 10:10:28.034058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:38.630 [2024-06-10 10:10:28.034070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:38.630 [2024-06-10 10:10:28.034082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:38.630 [2024-06-10 10:10:28.034092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:38.630 [2024-06-10 10:10:28.034106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.630 [2024-06-10 10:10:28.034118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:38.631 [2024-06-10 10:10:28.034133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:21:38.631 [2024-06-10 10:10:28.034145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.631 [2024-06-10 10:10:28.050743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.631 [2024-06-10 10:10:28.050796] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:38.631 [2024-06-10 10:10:28.050817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.523 ms 00:21:38.631 [2024-06-10 10:10:28.050831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.631 [2024-06-10 10:10:28.051294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.631 [2024-06-10 10:10:28.051316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:38.631 [2024-06-10 10:10:28.051335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:21:38.631 [2024-06-10 10:10:28.051347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.631 [2024-06-10 10:10:28.105106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.631 [2024-06-10 10:10:28.105170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.631 [2024-06-10 10:10:28.105194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.631 [2024-06-10 10:10:28.105207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.631 [2024-06-10 10:10:28.105303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.631 [2024-06-10 10:10:28.105320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.631 [2024-06-10 10:10:28.105334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.631 [2024-06-10 10:10:28.105346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.631 [2024-06-10 10:10:28.105478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.631 [2024-06-10 10:10:28.105499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.631 [2024-06-10 10:10:28.105515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.631 [2024-06-10 10:10:28.105527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.631 [2024-06-10 10:10:28.105557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.631 [2024-06-10 10:10:28.105572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.631 [2024-06-10 10:10:28.105590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.631 [2024-06-10 10:10:28.105602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.209404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.889 [2024-06-10 10:10:28.209470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.889 [2024-06-10 10:10:28.209493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.209507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.292086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.889 [2024-06-10 10:10:28.292156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.889 [2024-06-10 10:10:28.292196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.292209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.292328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:38.889 [2024-06-10 10:10:28.292346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.889 [2024-06-10 10:10:28.292361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.292374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.292439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.889 [2024-06-10 10:10:28.292456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.889 [2024-06-10 10:10:28.292474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.292486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.292608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.889 [2024-06-10 10:10:28.292629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.889 [2024-06-10 10:10:28.292644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.292698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.292792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.889 [2024-06-10 10:10:28.292812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:38.889 [2024-06-10 10:10:28.292827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.292839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.292891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.889 [2024-06-10 10:10:28.292910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.889 [2024-06-10 10:10:28.292925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.292936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.292996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:38.889 [2024-06-10 10:10:28.293013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.889 [2024-06-10 10:10:28.293031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:38.889 [2024-06-10 10:10:28.293043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.889 [2024-06-10 10:10:28.293205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.885 ms, result 0 00:21:38.889 true 00:21:38.889 10:10:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 81295 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@949 -- # '[' -z 81295 ']' 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@953 -- # kill -0 81295 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # uname 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 81295 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:38.889 killing process with pid 81295 00:21:38.889 10:10:28 ftl.ftl_restore -- 
common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@967 -- # echo 'killing process with pid 81295' 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@968 -- # kill 81295 00:21:38.889 10:10:28 ftl.ftl_restore -- common/autotest_common.sh@973 -- # wait 81295 00:21:42.173 10:10:31 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:46.358 262144+0 records in 00:21:46.358 262144+0 records out 00:21:46.358 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.78689 s, 224 MB/s 00:21:46.358 10:10:35 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:48.890 10:10:38 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:48.890 [2024-06-10 10:10:38.141379] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:48.890 [2024-06-10 10:10:38.141532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81537 ] 00:21:48.890 [2024-06-10 10:10:38.336358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.147 [2024-06-10 10:10:38.550543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.406 [2024-06-10 10:10:38.860530] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:49.406 [2024-06-10 10:10:38.860622] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:49.666 [2024-06-10 10:10:39.015345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.015423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:49.666 [2024-06-10 10:10:39.015444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:49.666 [2024-06-10 10:10:39.015457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.015531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.015553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:49.666 [2024-06-10 10:10:39.015567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:49.666 [2024-06-10 10:10:39.015578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.015614] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:49.666 [2024-06-10 10:10:39.016788] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:49.666 [2024-06-10 10:10:39.016964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.016986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:49.666 [2024-06-10 10:10:39.017007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:21:49.666 [2024-06-10 10:10:39.017019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.018207] 
mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:49.666 [2024-06-10 10:10:39.034835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.034882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:49.666 [2024-06-10 10:10:39.034900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.630 ms 00:21:49.666 [2024-06-10 10:10:39.034912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.034985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.035005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:49.666 [2024-06-10 10:10:39.035018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:49.666 [2024-06-10 10:10:39.035034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.039773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.039818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.666 [2024-06-10 10:10:39.039834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.648 ms 00:21:49.666 [2024-06-10 10:10:39.039845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.039941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.039959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.666 [2024-06-10 10:10:39.039976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:49.666 [2024-06-10 10:10:39.039987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.040048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.040065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:49.666 [2024-06-10 10:10:39.040077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:49.666 [2024-06-10 10:10:39.040088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.040121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:49.666 [2024-06-10 10:10:39.044389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.044427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.666 [2024-06-10 10:10:39.044459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:21:49.666 [2024-06-10 10:10:39.044470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.044512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.666 [2024-06-10 10:10:39.044530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:49.666 [2024-06-10 10:10:39.044542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:49.666 [2024-06-10 10:10:39.044553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.666 [2024-06-10 10:10:39.044597] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:49.666 [2024-06-10 10:10:39.044628] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:49.666 [2024-06-10 10:10:39.044683] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:49.666 [2024-06-10 10:10:39.044706] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:49.666 [2024-06-10 10:10:39.044814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:49.666 [2024-06-10 10:10:39.044830] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:49.666 [2024-06-10 10:10:39.044844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:49.666 [2024-06-10 10:10:39.044859] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:49.667 [2024-06-10 10:10:39.044871] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:49.667 [2024-06-10 10:10:39.044883] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:49.667 [2024-06-10 10:10:39.044894] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:49.667 [2024-06-10 10:10:39.044905] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:49.667 [2024-06-10 10:10:39.044915] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:49.667 [2024-06-10 10:10:39.044927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.667 [2024-06-10 10:10:39.044938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:49.667 [2024-06-10 10:10:39.044955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:21:49.667 [2024-06-10 10:10:39.044965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.667 [2024-06-10 10:10:39.045052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.667 [2024-06-10 10:10:39.045066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:49.667 [2024-06-10 10:10:39.045077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:49.667 [2024-06-10 10:10:39.045088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.667 [2024-06-10 10:10:39.045218] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:49.667 [2024-06-10 10:10:39.045237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:49.667 [2024-06-10 10:10:39.045249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:49.667 [2024-06-10 10:10:39.045289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:49.667 [2024-06-10 10:10:39.045323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:21:49.667 [2024-06-10 10:10:39.045349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.667 [2024-06-10 10:10:39.045359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:49.667 [2024-06-10 10:10:39.045370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:49.667 [2024-06-10 10:10:39.045380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.667 [2024-06-10 10:10:39.045390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:49.667 [2024-06-10 10:10:39.045402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:49.667 [2024-06-10 10:10:39.045413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:49.667 [2024-06-10 10:10:39.045434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:49.667 [2024-06-10 10:10:39.045466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:49.667 [2024-06-10 10:10:39.045510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:49.667 [2024-06-10 10:10:39.045541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:49.667 [2024-06-10 10:10:39.045571] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:49.667 [2024-06-10 10:10:39.045604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.667 [2024-06-10 10:10:39.045624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:49.667 [2024-06-10 10:10:39.045635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:49.667 [2024-06-10 10:10:39.045645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.667 [2024-06-10 10:10:39.045671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:49.667 [2024-06-10 10:10:39.045685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:49.667 [2024-06-10 10:10:39.045696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045706] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:49.667 [2024-06-10 10:10:39.045717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:49.667 [2024-06-10 10:10:39.045743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045753] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:49.667 [2024-06-10 10:10:39.045764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:49.667 [2024-06-10 10:10:39.045774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.667 [2024-06-10 10:10:39.045796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:49.667 [2024-06-10 10:10:39.045807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:49.667 [2024-06-10 10:10:39.045817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:49.667 [2024-06-10 10:10:39.045827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:49.667 [2024-06-10 10:10:39.045837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:49.667 [2024-06-10 10:10:39.045863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:49.667 [2024-06-10 10:10:39.045875] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:49.667 [2024-06-10 10:10:39.045890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.667 [2024-06-10 10:10:39.045903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:49.667 [2024-06-10 10:10:39.045915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:49.667 [2024-06-10 10:10:39.045926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:49.667 [2024-06-10 10:10:39.045938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:49.667 [2024-06-10 10:10:39.045949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:49.667 [2024-06-10 10:10:39.045961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:49.667 [2024-06-10 10:10:39.045972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:49.667 [2024-06-10 10:10:39.045984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:49.667 [2024-06-10 10:10:39.045996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:49.667 [2024-06-10 10:10:39.046007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:49.667 [2024-06-10 10:10:39.046019] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:49.667 [2024-06-10 10:10:39.046030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:49.667 [2024-06-10 10:10:39.046042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:49.667 [2024-06-10 10:10:39.046054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:49.667 [2024-06-10 10:10:39.046065] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:49.667 [2024-06-10 10:10:39.046078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.667 [2024-06-10 10:10:39.046092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:49.667 [2024-06-10 10:10:39.046104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:49.667 [2024-06-10 10:10:39.046116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:49.667 [2024-06-10 10:10:39.046127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:49.667 [2024-06-10 10:10:39.046140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.667 [2024-06-10 10:10:39.046152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:49.667 [2024-06-10 10:10:39.046173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:21:49.667 [2024-06-10 10:10:39.046185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.667 [2024-06-10 10:10:39.095138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.667 [2024-06-10 10:10:39.095224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:49.667 [2024-06-10 10:10:39.095250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.868 ms 00:21:49.667 [2024-06-10 10:10:39.095263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.667 [2024-06-10 10:10:39.095388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.667 [2024-06-10 10:10:39.095404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:49.668 [2024-06-10 10:10:39.095417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:49.668 [2024-06-10 10:10:39.095428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.668 [2024-06-10 10:10:39.133907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.668 [2024-06-10 10:10:39.133972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.668 [2024-06-10 10:10:39.134008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.388 ms 00:21:49.668 [2024-06-10 10:10:39.134020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.668 [2024-06-10 10:10:39.134092] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.668 [2024-06-10 10:10:39.134108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.668 [2024-06-10 10:10:39.134122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.668 [2024-06-10 10:10:39.134133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.668 [2024-06-10 10:10:39.134526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.668 [2024-06-10 10:10:39.134550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.668 [2024-06-10 10:10:39.134563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:21:49.668 [2024-06-10 10:10:39.134573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.668 [2024-06-10 10:10:39.134768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.668 [2024-06-10 10:10:39.134801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.668 [2024-06-10 10:10:39.134814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:21:49.668 [2024-06-10 10:10:39.134824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.668 [2024-06-10 10:10:39.150887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.668 [2024-06-10 10:10:39.150942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.668 [2024-06-10 10:10:39.150978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.031 ms 00:21:49.668 [2024-06-10 10:10:39.150989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.668 [2024-06-10 10:10:39.167963] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:49.668 [2024-06-10 10:10:39.168031] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:49.668 [2024-06-10 10:10:39.168069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.668 [2024-06-10 10:10:39.168081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:49.668 [2024-06-10 10:10:39.168096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.903 ms 00:21:49.668 [2024-06-10 10:10:39.168106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.198174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.198219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:49.928 [2024-06-10 10:10:39.198237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.012 ms 00:21:49.928 [2024-06-10 10:10:39.198249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.214203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.214285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:49.928 [2024-06-10 10:10:39.214307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.889 ms 00:21:49.928 [2024-06-10 10:10:39.214325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.230849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 
10:10:39.230918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:49.928 [2024-06-10 10:10:39.230939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.436 ms 00:21:49.928 [2024-06-10 10:10:39.230950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.231852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.231889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:49.928 [2024-06-10 10:10:39.231905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:21:49.928 [2024-06-10 10:10:39.231917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.304673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.304742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:49.928 [2024-06-10 10:10:39.304779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.715 ms 00:21:49.928 [2024-06-10 10:10:39.304792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.317525] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:49.928 [2024-06-10 10:10:39.320168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.320218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:49.928 [2024-06-10 10:10:39.320243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.306 ms 00:21:49.928 [2024-06-10 10:10:39.320255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.320358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.320378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:49.928 [2024-06-10 10:10:39.320392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:49.928 [2024-06-10 10:10:39.320403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.320492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.320511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:49.928 [2024-06-10 10:10:39.320524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:49.928 [2024-06-10 10:10:39.320541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.320574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.320588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:49.928 [2024-06-10 10:10:39.320600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:49.928 [2024-06-10 10:10:39.320611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.320820] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:49.928 [2024-06-10 10:10:39.320888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.320932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:49.928 
[2024-06-10 10:10:39.320973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:49.928 [2024-06-10 10:10:39.321016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.353070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.353253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:49.928 [2024-06-10 10:10:39.353420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.905 ms 00:21:49.928 [2024-06-10 10:10:39.353444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.353528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.928 [2024-06-10 10:10:39.353548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:49.928 [2024-06-10 10:10:39.353561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:49.928 [2024-06-10 10:10:39.353580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.928 [2024-06-10 10:10:39.354778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.909 ms, result 0 00:22:28.532  Copying: 1024/1024 [MB] (average 26 MBps)[2024-06-10 10:11:18.030303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.532 [2024-06-10 10:11:18.030367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:28.532 [2024-06-10 10:11:18.030397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:28.532 [2024-06-10 10:11:18.030409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.532 [2024-06-10 10:11:18.030440] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:28.532 [2024-06-10 10:11:18.033808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.532 [2024-06-10 10:11:18.033845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:28.532 [2024-06-10 10:11:18.033861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:22:28.532
[2024-06-10 10:11:18.033872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.532 [2024-06-10 10:11:18.035393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.532 [2024-06-10 10:11:18.035438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:28.532 [2024-06-10 10:11:18.035463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:22:28.532 [2024-06-10 10:11:18.035475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.792 [2024-06-10 10:11:18.051897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.792 [2024-06-10 10:11:18.051945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:28.792 [2024-06-10 10:11:18.051964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.397 ms 00:22:28.792 [2024-06-10 10:11:18.051976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.792 [2024-06-10 10:11:18.058763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.792 [2024-06-10 10:11:18.058802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:28.792 [2024-06-10 10:11:18.058818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.743 ms 00:22:28.792 [2024-06-10 10:11:18.058839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.792 [2024-06-10 10:11:18.091195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.792 [2024-06-10 10:11:18.091240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:28.792 [2024-06-10 10:11:18.091257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.284 ms 00:22:28.792 [2024-06-10 10:11:18.091269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.792 [2024-06-10 10:11:18.109609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.792 [2024-06-10 10:11:18.109695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:28.792 [2024-06-10 10:11:18.109715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.291 ms 00:22:28.792 [2024-06-10 10:11:18.109727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.792 [2024-06-10 10:11:18.109889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.792 [2024-06-10 10:11:18.109911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:28.792 [2024-06-10 10:11:18.109924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:22:28.793 [2024-06-10 10:11:18.109936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.793 [2024-06-10 10:11:18.141855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.793 [2024-06-10 10:11:18.141900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:28.793 [2024-06-10 10:11:18.141918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.897 ms 00:22:28.793 [2024-06-10 10:11:18.141929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.793 [2024-06-10 10:11:18.174134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.793 [2024-06-10 10:11:18.174188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:28.793 [2024-06-10 10:11:18.174221] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.160 ms 00:22:28.793 [2024-06-10 10:11:18.174232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.793 [2024-06-10 10:11:18.205641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.793 [2024-06-10 10:11:18.205711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:28.793 [2024-06-10 10:11:18.205746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.366 ms 00:22:28.793 [2024-06-10 10:11:18.205758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.793 [2024-06-10 10:11:18.236511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.793 [2024-06-10 10:11:18.236553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:28.793 [2024-06-10 10:11:18.236586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.650 ms 00:22:28.793 [2024-06-10 10:11:18.236597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.793 [2024-06-10 10:11:18.236655] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:28.793 [2024-06-10 10:11:18.236696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.236996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237188] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 
10:11:18.237492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:28.793 [2024-06-10 10:11:18.237575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:22:28.794 [2024-06-10 10:11:18.237799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:28.794 [2024-06-10 10:11:18.237916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:28.794 [2024-06-10 10:11:18.237928] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef5368c-bce2-41c6-87e9-246a186c5c8a 00:22:28.794 [2024-06-10 10:11:18.237939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:28.794 [2024-06-10 10:11:18.237950] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:28.794 [2024-06-10 10:11:18.237961] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:28.794 [2024-06-10 10:11:18.237972] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:28.794 [2024-06-10 10:11:18.237991] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:28.794 [2024-06-10 10:11:18.238002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:28.794 [2024-06-10 10:11:18.238013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:28.794 [2024-06-10 10:11:18.238024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:28.794 [2024-06-10 10:11:18.238034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:28.794 [2024-06-10 10:11:18.238045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.794 [2024-06-10 10:11:18.238057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:28.794 [2024-06-10 10:11:18.238068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:22:28.794 [2024-06-10 10:11:18.238079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.794 [2024-06-10 10:11:18.254765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.794 [2024-06-10 10:11:18.254811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:28.794 [2024-06-10 10:11:18.254848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.644 ms 00:22:28.794 [2024-06-10 10:11:18.254859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.794 [2024-06-10 10:11:18.255298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:28.794 [2024-06-10 10:11:18.255328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:28.794 [2024-06-10 10:11:18.255341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:22:28.794 [2024-06-10 10:11:18.255352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.794 [2024-06-10 10:11:18.292351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.794 [2024-06-10 10:11:18.292415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.794 [2024-06-10 10:11:18.292449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.794 [2024-06-10 10:11:18.292461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.794 [2024-06-10 10:11:18.292539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.794 [2024-06-10 10:11:18.292554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.794 [2024-06-10 10:11:18.292566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.794 [2024-06-10 10:11:18.292577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.794 [2024-06-10 10:11:18.292686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.794 [2024-06-10 10:11:18.292709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.794 [2024-06-10 10:11:18.292728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.794 [2024-06-10 10:11:18.292740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.794 [2024-06-10 10:11:18.292764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.794 [2024-06-10 10:11:18.292778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.794 [2024-06-10 10:11:18.292789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.794 [2024-06-10 10:11:18.292799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.053 [2024-06-10 10:11:18.393913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.053 [2024-06-10 10:11:18.393981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.053 [2024-06-10 10:11:18.394025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.053 [2024-06-10 10:11:18.394037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.053 [2024-06-10 10:11:18.479389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.053 [2024-06-10 10:11:18.479465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.053 [2024-06-10 10:11:18.479485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.053 [2024-06-10 10:11:18.479498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.053 [2024-06-10 10:11:18.479578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.054 [2024-06-10 10:11:18.479593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.054 [2024-06-10 10:11:18.479605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.054 [2024-06-10 10:11:18.479629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.054 
[2024-06-10 10:11:18.479702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.054 [2024-06-10 10:11:18.479719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.054 [2024-06-10 10:11:18.479731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.054 [2024-06-10 10:11:18.479742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.054 [2024-06-10 10:11:18.479876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.054 [2024-06-10 10:11:18.479896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.054 [2024-06-10 10:11:18.479909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.054 [2024-06-10 10:11:18.479920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.054 [2024-06-10 10:11:18.479978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.054 [2024-06-10 10:11:18.479997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:29.054 [2024-06-10 10:11:18.480009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.054 [2024-06-10 10:11:18.480020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.054 [2024-06-10 10:11:18.480063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.054 [2024-06-10 10:11:18.480078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.054 [2024-06-10 10:11:18.480089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.054 [2024-06-10 10:11:18.480100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.054 [2024-06-10 10:11:18.480155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:29.054 [2024-06-10 10:11:18.480172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.054 [2024-06-10 10:11:18.480184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:29.054 [2024-06-10 10:11:18.480195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.054 [2024-06-10 10:11:18.480337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.999 ms, result 0 00:22:30.436 00:22:30.436 00:22:30.436 10:11:19 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:30.436 [2024-06-10 10:11:19.720613] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:22:30.436 [2024-06-10 10:11:19.720838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81952 ] 00:22:30.436 [2024-06-10 10:11:19.889964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.736 [2024-06-10 10:11:20.110948] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.015 [2024-06-10 10:11:20.444847] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:31.015 [2024-06-10 10:11:20.444934] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:31.273 [2024-06-10 10:11:20.598564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.598631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:31.273 [2024-06-10 10:11:20.598670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:31.273 [2024-06-10 10:11:20.598683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.598757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.598778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:31.273 [2024-06-10 10:11:20.598791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:31.273 [2024-06-10 10:11:20.598802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.598838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:31.273 [2024-06-10 10:11:20.599775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:31.273 [2024-06-10 10:11:20.599819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.599834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:31.273 [2024-06-10 10:11:20.599852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:22:31.273 [2024-06-10 10:11:20.599863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.600960] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:31.273 [2024-06-10 10:11:20.617283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.617325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:31.273 [2024-06-10 10:11:20.617359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.325 ms 00:22:31.273 [2024-06-10 10:11:20.617370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.617441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.617460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:31.273 [2024-06-10 10:11:20.617473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:31.273 [2024-06-10 10:11:20.617487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.622035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 
10:11:20.622254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:31.273 [2024-06-10 10:11:20.622379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 00:22:31.273 [2024-06-10 10:11:20.622402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.622505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.622524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:31.273 [2024-06-10 10:11:20.622540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:31.273 [2024-06-10 10:11:20.622551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.622619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.622778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:31.273 [2024-06-10 10:11:20.622839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:31.273 [2024-06-10 10:11:20.622880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.623038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:31.273 [2024-06-10 10:11:20.627439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.627630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:31.273 [2024-06-10 10:11:20.627788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.412 ms 00:22:31.273 [2024-06-10 10:11:20.627839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.627922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.628010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:31.273 [2024-06-10 10:11:20.628077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:31.273 [2024-06-10 10:11:20.628115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.628211] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:31.273 [2024-06-10 10:11:20.628324] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:31.273 [2024-06-10 10:11:20.628418] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:31.273 [2024-06-10 10:11:20.628487] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:31.273 [2024-06-10 10:11:20.628718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:31.273 [2024-06-10 10:11:20.628913] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:31.273 [2024-06-10 10:11:20.629070] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:31.273 [2024-06-10 10:11:20.629143] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:31.273 [2024-06-10 10:11:20.629270] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:31.273 [2024-06-10 10:11:20.629400] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:31.273 [2024-06-10 10:11:20.629447] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:31.273 [2024-06-10 10:11:20.629526] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:31.273 [2024-06-10 10:11:20.629572] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:31.273 [2024-06-10 10:11:20.629611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.629676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:31.273 [2024-06-10 10:11:20.629730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:22:31.273 [2024-06-10 10:11:20.629810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.629945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.273 [2024-06-10 10:11:20.629993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:31.273 [2024-06-10 10:11:20.630092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:31.273 [2024-06-10 10:11:20.630107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.273 [2024-06-10 10:11:20.630218] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:31.273 [2024-06-10 10:11:20.630237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:31.273 [2024-06-10 10:11:20.630249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:31.274 [2024-06-10 10:11:20.630268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:31.274 [2024-06-10 10:11:20.630290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:31.274 [2024-06-10 10:11:20.630311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:31.274 [2024-06-10 10:11:20.630322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:31.274 [2024-06-10 10:11:20.630342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:31.274 [2024-06-10 10:11:20.630352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:31.274 [2024-06-10 10:11:20.630362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:31.274 [2024-06-10 10:11:20.630372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:31.274 [2024-06-10 10:11:20.630383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:31.274 [2024-06-10 10:11:20.630393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:31.274 [2024-06-10 10:11:20.630413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:31.274 [2024-06-10 10:11:20.630423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:31.274 [2024-06-10 10:11:20.630443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630453] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.274 [2024-06-10 10:11:20.630476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:31.274 [2024-06-10 10:11:20.630487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.274 [2024-06-10 10:11:20.630508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:31.274 [2024-06-10 10:11:20.630518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.274 [2024-06-10 10:11:20.630538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:31.274 [2024-06-10 10:11:20.630548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.274 [2024-06-10 10:11:20.630568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:31.274 [2024-06-10 10:11:20.630579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:31.274 [2024-06-10 10:11:20.630599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:31.274 [2024-06-10 10:11:20.630611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:31.274 [2024-06-10 10:11:20.630622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:31.274 [2024-06-10 10:11:20.630632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:31.274 [2024-06-10 10:11:20.630842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:31.274 [2024-06-10 10:11:20.630889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.274 [2024-06-10 10:11:20.630987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:31.274 [2024-06-10 10:11:20.631034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:31.274 [2024-06-10 10:11:20.631071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.274 [2024-06-10 10:11:20.631106] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:31.274 [2024-06-10 10:11:20.631158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:31.274 [2024-06-10 10:11:20.631204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:31.274 [2024-06-10 10:11:20.631292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.274 [2024-06-10 10:11:20.631341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:31.274 [2024-06-10 10:11:20.631378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:31.274 [2024-06-10 10:11:20.631415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:31.274 [2024-06-10 10:11:20.631452] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:31.274 [2024-06-10 10:11:20.631489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:31.274 [2024-06-10 10:11:20.631503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:31.274 [2024-06-10 10:11:20.631516] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:31.274 [2024-06-10 10:11:20.631531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:31.274 [2024-06-10 10:11:20.631544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:31.274 [2024-06-10 10:11:20.631556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:31.274 [2024-06-10 10:11:20.631567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:31.274 [2024-06-10 10:11:20.631578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:31.274 [2024-06-10 10:11:20.631589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:31.274 [2024-06-10 10:11:20.631600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:31.274 [2024-06-10 10:11:20.631611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:31.274 [2024-06-10 10:11:20.631622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:31.274 [2024-06-10 10:11:20.631633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:31.274 [2024-06-10 10:11:20.631663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:31.274 [2024-06-10 10:11:20.631675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:31.274 [2024-06-10 10:11:20.631687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:31.274 [2024-06-10 10:11:20.631698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:31.274 [2024-06-10 10:11:20.631710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:31.274 [2024-06-10 10:11:20.631721] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:31.274 [2024-06-10 10:11:20.631733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:31.274 [2024-06-10 10:11:20.631746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:22:31.274 [2024-06-10 10:11:20.631757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:31.274 [2024-06-10 10:11:20.631769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:31.275 [2024-06-10 10:11:20.631780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:31.275 [2024-06-10 10:11:20.631793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.631805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:31.275 [2024-06-10 10:11:20.631825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:22:31.275 [2024-06-10 10:11:20.631837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.675905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.675971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.275 [2024-06-10 10:11:20.676024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.998 ms 00:22:31.275 [2024-06-10 10:11:20.676036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.676168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.676185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:31.275 [2024-06-10 10:11:20.676197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:31.275 [2024-06-10 10:11:20.676208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.714499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.714571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.275 [2024-06-10 10:11:20.714592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.202 ms 00:22:31.275 [2024-06-10 10:11:20.714604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.714691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.714711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.275 [2024-06-10 10:11:20.714724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:31.275 [2024-06-10 10:11:20.714735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.715122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.715164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.275 [2024-06-10 10:11:20.715180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:22:31.275 [2024-06-10 10:11:20.715191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.715352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.715387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.275 [2024-06-10 10:11:20.715401] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:22:31.275 [2024-06-10 10:11:20.715412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.732626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.732718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.275 [2024-06-10 10:11:20.732738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.186 ms 00:22:31.275 [2024-06-10 10:11:20.732750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.749435] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:31.275 [2024-06-10 10:11:20.749494] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:31.275 [2024-06-10 10:11:20.749518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.749531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:31.275 [2024-06-10 10:11:20.749544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.621 ms 00:22:31.275 [2024-06-10 10:11:20.749555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.275 [2024-06-10 10:11:20.779326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.275 [2024-06-10 10:11:20.779371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:31.275 [2024-06-10 10:11:20.779388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.724 ms 00:22:31.275 [2024-06-10 10:11:20.779408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.533 [2024-06-10 10:11:20.795513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.533 [2024-06-10 10:11:20.795570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:31.533 [2024-06-10 10:11:20.795603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.072 ms 00:22:31.533 [2024-06-10 10:11:20.795614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.533 [2024-06-10 10:11:20.810996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.533 [2024-06-10 10:11:20.811035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:31.533 [2024-06-10 10:11:20.811079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.323 ms 00:22:31.533 [2024-06-10 10:11:20.811090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.533 [2024-06-10 10:11:20.811948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.533 [2024-06-10 10:11:20.811986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:31.533 [2024-06-10 10:11:20.812001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:22:31.533 [2024-06-10 10:11:20.812014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.533 [2024-06-10 10:11:20.884923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.533 [2024-06-10 10:11:20.884990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:31.533 [2024-06-10 10:11:20.885027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.883 ms 00:22:31.533 
[2024-06-10 10:11:20.885038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.533 [2024-06-10 10:11:20.897466] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:31.534 [2024-06-10 10:11:20.900073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.534 [2024-06-10 10:11:20.900111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:31.534 [2024-06-10 10:11:20.900144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.956 ms 00:22:31.534 [2024-06-10 10:11:20.900155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.534 [2024-06-10 10:11:20.900268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.534 [2024-06-10 10:11:20.900286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:31.534 [2024-06-10 10:11:20.900299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:31.534 [2024-06-10 10:11:20.900310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.534 [2024-06-10 10:11:20.900414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.534 [2024-06-10 10:11:20.900432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:31.534 [2024-06-10 10:11:20.900450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:31.534 [2024-06-10 10:11:20.900461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.534 [2024-06-10 10:11:20.900493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.534 [2024-06-10 10:11:20.900508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:31.534 [2024-06-10 10:11:20.900520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:31.534 [2024-06-10 10:11:20.900530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.534 [2024-06-10 10:11:20.900569] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:31.534 [2024-06-10 10:11:20.900585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.534 [2024-06-10 10:11:20.900597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:31.534 [2024-06-10 10:11:20.900623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:31.534 [2024-06-10 10:11:20.900637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.534 [2024-06-10 10:11:20.931472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.534 [2024-06-10 10:11:20.931531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:31.534 [2024-06-10 10:11:20.931564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.812 ms 00:22:31.534 [2024-06-10 10:11:20.931575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.534 [2024-06-10 10:11:20.931673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.534 [2024-06-10 10:11:20.931694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:31.534 [2024-06-10 10:11:20.931714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:31.534 [2024-06-10 10:11:20.931725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.534 [2024-06-10 
10:11:20.932959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.858 ms, result 0 00:23:11.011  Copying: 1024/1024 [MB] (average 26 MBps)[2024-06-10 10:12:00.259317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.259443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:11.011 [2024-06-10 10:12:00.259471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:11.011 [2024-06-10 10:12:00.259486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.259525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:11.011 [2024-06-10 10:12:00.263796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.263843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:11.011 [2024-06-10 10:12:00.263863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.242 ms 00:23:11.011 [2024-06-10 10:12:00.263877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.264177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.264215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:11.011 [2024-06-10 10:12:00.264233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:23:11.011 [2024-06-10 10:12:00.264246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.269012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.269055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:11.011 [2024-06-10 10:12:00.269073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:23:11.011 [2024-06-10 10:12:00.269087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.278597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.278656] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.011 [2024-06-10 10:12:00.278687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.478 ms 00:23:11.011 [2024-06-10 10:12:00.278701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.318431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.318505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.011 [2024-06-10 10:12:00.318528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.634 ms 00:23:11.011 [2024-06-10 10:12:00.318542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.340102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.340182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.011 [2024-06-10 10:12:00.340206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.498 ms 00:23:11.011 [2024-06-10 10:12:00.340221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.340426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.340463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:11.011 [2024-06-10 10:12:00.340480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:23:11.011 [2024-06-10 10:12:00.340494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.379560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.379630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:11.011 [2024-06-10 10:12:00.379674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.026 ms 00:23:11.011 [2024-06-10 10:12:00.379688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.418281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.418374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:11.011 [2024-06-10 10:12:00.418398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.528 ms 00:23:11.011 [2024-06-10 10:12:00.418412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.457027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.457096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.011 [2024-06-10 10:12:00.457118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.532 ms 00:23:11.011 [2024-06-10 10:12:00.457132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.495036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.011 [2024-06-10 10:12:00.495096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.011 [2024-06-10 10:12:00.495115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.761 ms 00:23:11.011 [2024-06-10 10:12:00.495127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.011 [2024-06-10 10:12:00.495189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:23:11.011 [2024-06-10 10:12:00.495222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.011 [2024-06-10 10:12:00.495693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495836] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.495992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496136] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 
10:12:00.496436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.012 [2024-06-10 10:12:00.496457] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.012 [2024-06-10 10:12:00.496469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef5368c-bce2-41c6-87e9-246a186c5c8a 00:23:11.012 [2024-06-10 10:12:00.496481] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:11.012 [2024-06-10 10:12:00.496492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:11.012 [2024-06-10 10:12:00.496517] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:11.012 [2024-06-10 10:12:00.496537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:11.012 [2024-06-10 10:12:00.496547] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.012 [2024-06-10 10:12:00.496559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.012 [2024-06-10 10:12:00.496570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.012 [2024-06-10 10:12:00.496580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.012 [2024-06-10 10:12:00.496590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.012 [2024-06-10 10:12:00.496602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.012 [2024-06-10 10:12:00.496615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.012 [2024-06-10 10:12:00.496627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:23:11.012 [2024-06-10 10:12:00.497035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.012 [2024-06-10 10:12:00.513660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.012 [2024-06-10 10:12:00.513834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.012 [2024-06-10 10:12:00.513957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.522 ms 00:23:11.012 [2024-06-10 10:12:00.514076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.012 [2024-06-10 10:12:00.514570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.012 [2024-06-10 10:12:00.514714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.012 [2024-06-10 10:12:00.514846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:23:11.012 [2024-06-10 10:12:00.514903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.552440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.552685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.273 [2024-06-10 10:12:00.552824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.552972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.553095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.553149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.273 [2024-06-10 10:12:00.553281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 
[2024-06-10 10:12:00.553335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.553531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.553574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.273 [2024-06-10 10:12:00.553590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.553602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.553628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.553662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.273 [2024-06-10 10:12:00.553677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.553688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.653212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.653288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.273 [2024-06-10 10:12:00.653307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.653319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.738107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.273 [2024-06-10 10:12:00.738128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.738139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.738234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.273 [2024-06-10 10:12:00.738258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.738269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.738327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.273 [2024-06-10 10:12:00.738339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.738350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.738489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.273 [2024-06-10 10:12:00.738501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.738519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.738587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.273 [2024-06-10 10:12:00.738599] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.738611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.738704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.273 [2024-06-10 10:12:00.738727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.738745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.273 [2024-06-10 10:12:00.738826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.273 [2024-06-10 10:12:00.738838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.273 [2024-06-10 10:12:00.738849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.273 [2024-06-10 10:12:00.738993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 479.648 ms, result 0 00:23:12.649 00:23:12.649 00:23:12.649 10:12:01 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.176 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:15.176 10:12:04 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:15.176 [2024-06-10 10:12:04.151385] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:23:15.176 [2024-06-10 10:12:04.151528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82395 ] 00:23:15.176 [2024-06-10 10:12:04.315616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.176 [2024-06-10 10:12:04.500227] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.434 [2024-06-10 10:12:04.807397] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:15.434 [2024-06-10 10:12:04.807484] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:15.694 [2024-06-10 10:12:04.964412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.964504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:15.694 [2024-06-10 10:12:04.964534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:15.694 [2024-06-10 10:12:04.964550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.964688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.964716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.694 [2024-06-10 10:12:04.964734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:15.694 [2024-06-10 10:12:04.964750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.964798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:15.694 [2024-06-10 10:12:04.965832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:15.694 [2024-06-10 10:12:04.965876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.965891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.694 [2024-06-10 10:12:04.965910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:23:15.694 [2024-06-10 10:12:04.965930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.967143] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:15.694 [2024-06-10 10:12:04.983570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.983618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:15.694 [2024-06-10 10:12:04.983660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.428 ms 00:23:15.694 [2024-06-10 10:12:04.983677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.983753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.983773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:15.694 [2024-06-10 10:12:04.983786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:15.694 [2024-06-10 10:12:04.983801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.988105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 
10:12:04.988155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.694 [2024-06-10 10:12:04.988171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.209 ms 00:23:15.694 [2024-06-10 10:12:04.988183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.988287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.988307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.694 [2024-06-10 10:12:04.988324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:15.694 [2024-06-10 10:12:04.988335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.988402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.988419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:15.694 [2024-06-10 10:12:04.988431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:15.694 [2024-06-10 10:12:04.988442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.988476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.694 [2024-06-10 10:12:04.992784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.992822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.694 [2024-06-10 10:12:04.992838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:23:15.694 [2024-06-10 10:12:04.992849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.992893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.992912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:15.694 [2024-06-10 10:12:04.992925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:15.694 [2024-06-10 10:12:04.992936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.992982] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:15.694 [2024-06-10 10:12:04.993012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:15.694 [2024-06-10 10:12:04.993055] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:15.694 [2024-06-10 10:12:04.993076] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:15.694 [2024-06-10 10:12:04.993187] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:15.694 [2024-06-10 10:12:04.993202] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:15.694 [2024-06-10 10:12:04.993218] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:15.694 [2024-06-10 10:12:04.993233] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993247] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993259] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:15.694 [2024-06-10 10:12:04.993270] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:15.694 [2024-06-10 10:12:04.993281] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:15.694 [2024-06-10 10:12:04.993291] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:15.694 [2024-06-10 10:12:04.993303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.993315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:15.694 [2024-06-10 10:12:04.993330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:23:15.694 [2024-06-10 10:12:04.993341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.993437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.694 [2024-06-10 10:12:04.993452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:15.694 [2024-06-10 10:12:04.993464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:15.694 [2024-06-10 10:12:04.993475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.694 [2024-06-10 10:12:04.993580] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:15.694 [2024-06-10 10:12:04.993596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:15.694 [2024-06-10 10:12:04.993608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:15.694 [2024-06-10 10:12:04.993670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:15.694 [2024-06-10 10:12:04.993705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.694 [2024-06-10 10:12:04.993735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:15.694 [2024-06-10 10:12:04.993746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:15.694 [2024-06-10 10:12:04.993756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.694 [2024-06-10 10:12:04.993767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:15.694 [2024-06-10 10:12:04.993777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:15.694 [2024-06-10 10:12:04.993787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:15.694 [2024-06-10 10:12:04.993811] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:15.694 [2024-06-10 10:12:04.993845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:15.694 [2024-06-10 10:12:04.993891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:15.694 [2024-06-10 10:12:04.993921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:15.694 [2024-06-10 10:12:04.993951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:15.694 [2024-06-10 10:12:04.993961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.694 [2024-06-10 10:12:04.993971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:15.695 [2024-06-10 10:12:04.993981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:15.695 [2024-06-10 10:12:04.993991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.695 [2024-06-10 10:12:04.994001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:15.695 [2024-06-10 10:12:04.994011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:15.695 [2024-06-10 10:12:04.994021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.695 [2024-06-10 10:12:04.994031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:15.695 [2024-06-10 10:12:04.994042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:15.695 [2024-06-10 10:12:04.994052] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.695 [2024-06-10 10:12:04.994062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:15.695 [2024-06-10 10:12:04.994072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:15.695 [2024-06-10 10:12:04.994081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.695 [2024-06-10 10:12:04.994091] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:15.695 [2024-06-10 10:12:04.994102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:15.695 [2024-06-10 10:12:04.994113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.695 [2024-06-10 10:12:04.994124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.695 [2024-06-10 10:12:04.994134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:15.695 [2024-06-10 10:12:04.994145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:15.695 [2024-06-10 10:12:04.994155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:15.695 [2024-06-10 10:12:04.994165] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:15.695 [2024-06-10 10:12:04.994175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:15.695 [2024-06-10 10:12:04.994187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:15.695 [2024-06-10 10:12:04.994199] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:15.695 [2024-06-10 10:12:04.994213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.695 [2024-06-10 10:12:04.994226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:15.695 [2024-06-10 10:12:04.994237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:15.695 [2024-06-10 10:12:04.994248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:15.695 [2024-06-10 10:12:04.994259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:15.695 [2024-06-10 10:12:04.994271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:15.695 [2024-06-10 10:12:04.994282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:15.695 [2024-06-10 10:12:04.994293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:15.695 [2024-06-10 10:12:04.994304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:15.695 [2024-06-10 10:12:04.994314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:15.695 [2024-06-10 10:12:04.994326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:15.695 [2024-06-10 10:12:04.994336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:15.695 [2024-06-10 10:12:04.994347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:15.695 [2024-06-10 10:12:04.994358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:15.695 [2024-06-10 10:12:04.994369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:15.695 [2024-06-10 10:12:04.994380] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:15.695 [2024-06-10 10:12:04.994393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.695 [2024-06-10 10:12:04.994405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
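The SB metadata layout entries above list each region as hex blk_offs/blk_sz values counted in FTL blocks, while the dump_region lines earlier report the same regions in MiB. The two agree if the usual 4 KiB SPDK FTL block size is assumed (the block size itself is not printed in this log); a minimal conversion sketch, purely illustrative:

FTL_BLOCK_SIZE = 4096  # assumed FTL block size in bytes; not stated in this log

def blocks_to_mib(blk_count_hex: str) -> float:
    """Convert a hex FTL block count (e.g. '0x5000') to MiB."""
    return int(blk_count_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

print(blocks_to_mib("0x5000"))  # 80.0   -> matches "Region l2p ... blocks: 80.00 MiB"
print(blocks_to_mib("0x20"))    # 0.125  -> matches "Region sb ... blocks: 0.12 MiB"
print(blocks_to_mib("0x800"))   # 8.0    -> matches "Region p2l0 ... blocks: 8.00 MiB"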
00:23:15.695 [2024-06-10 10:12:04.994416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:15.695 [2024-06-10 10:12:04.994427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:15.695 [2024-06-10 10:12:04.994438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:15.695 [2024-06-10 10:12:04.994450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:04.994462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:15.695 [2024-06-10 10:12:04.994478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:23:15.695 [2024-06-10 10:12:04.994489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.035455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.035525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.695 [2024-06-10 10:12:05.035547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.879 ms 00:23:15.695 [2024-06-10 10:12:05.035560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.035875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.035938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:15.695 [2024-06-10 10:12:05.035959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:23:15.695 [2024-06-10 10:12:05.035972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.074405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.074458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.695 [2024-06-10 10:12:05.074478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.336 ms 00:23:15.695 [2024-06-10 10:12:05.074490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.074559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.074575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.695 [2024-06-10 10:12:05.074588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:15.695 [2024-06-10 10:12:05.074599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.075005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.075029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.695 [2024-06-10 10:12:05.075042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:23:15.695 [2024-06-10 10:12:05.075052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.075219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.075239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.695 [2024-06-10 10:12:05.075252] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:23:15.695 [2024-06-10 10:12:05.075262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.092057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.092113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.695 [2024-06-10 10:12:05.092132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.756 ms 00:23:15.695 [2024-06-10 10:12:05.092145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.108363] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:15.695 [2024-06-10 10:12:05.108408] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:15.695 [2024-06-10 10:12:05.108432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.108444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:15.695 [2024-06-10 10:12:05.108457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:23:15.695 [2024-06-10 10:12:05.108469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.138217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.138318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:15.695 [2024-06-10 10:12:05.138342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.695 ms 00:23:15.695 [2024-06-10 10:12:05.138368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.154542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.154601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:15.695 [2024-06-10 10:12:05.154619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.083 ms 00:23:15.695 [2024-06-10 10:12:05.154630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.170708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.170765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:15.695 [2024-06-10 10:12:05.170787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.018 ms 00:23:15.695 [2024-06-10 10:12:05.170803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.695 [2024-06-10 10:12:05.171861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.695 [2024-06-10 10:12:05.171908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:15.695 [2024-06-10 10:12:05.171930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:23:15.695 [2024-06-10 10:12:05.171945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.243983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.244058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:15.954 [2024-06-10 10:12:05.244079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.001 ms 00:23:15.954 
[2024-06-10 10:12:05.244091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.257087] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:15.954 [2024-06-10 10:12:05.259753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.259795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:15.954 [2024-06-10 10:12:05.259815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.581 ms 00:23:15.954 [2024-06-10 10:12:05.259826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.259938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.259958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:15.954 [2024-06-10 10:12:05.259971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:15.954 [2024-06-10 10:12:05.259982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.260070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.260088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:15.954 [2024-06-10 10:12:05.260106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:15.954 [2024-06-10 10:12:05.260118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.260149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.260164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:15.954 [2024-06-10 10:12:05.260176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:15.954 [2024-06-10 10:12:05.260186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.260227] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:15.954 [2024-06-10 10:12:05.260242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.260254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:15.954 [2024-06-10 10:12:05.260266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:15.954 [2024-06-10 10:12:05.260280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.291740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.291804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:15.954 [2024-06-10 10:12:05.291825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.434 ms 00:23:15.954 [2024-06-10 10:12:05.291837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 10:12:05.291927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.954 [2024-06-10 10:12:05.291947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:15.954 [2024-06-10 10:12:05.291967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:15.954 [2024-06-10 10:12:05.291978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.954 [2024-06-10 
10:12:05.293080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.164 ms, result 0 00:23:54.065  Copying: 26/1024 [MB] (26 MBps) Copying: 54/1024 [MB] (28 MBps) Copying: 82/1024 [MB] (28 MBps) Copying: 109/1024 [MB] (26 MBps) Copying: 136/1024 [MB] (27 MBps) Copying: 164/1024 [MB] (27 MBps) Copying: 190/1024 [MB] (26 MBps) Copying: 215/1024 [MB] (24 MBps) Copying: 241/1024 [MB] (26 MBps) Copying: 268/1024 [MB] (26 MBps) Copying: 296/1024 [MB] (28 MBps) Copying: 323/1024 [MB] (26 MBps) Copying: 351/1024 [MB] (28 MBps) Copying: 377/1024 [MB] (26 MBps) Copying: 404/1024 [MB] (26 MBps) Copying: 432/1024 [MB] (27 MBps) Copying: 460/1024 [MB] (28 MBps) Copying: 490/1024 [MB] (29 MBps) Copying: 520/1024 [MB] (30 MBps) Copying: 550/1024 [MB] (29 MBps) Copying: 580/1024 [MB] (30 MBps) Copying: 609/1024 [MB] (28 MBps) Copying: 636/1024 [MB] (27 MBps) Copying: 664/1024 [MB] (27 MBps) Copying: 689/1024 [MB] (24 MBps) Copying: 715/1024 [MB] (26 MBps) Copying: 744/1024 [MB] (28 MBps) Copying: 772/1024 [MB] (28 MBps) Copying: 799/1024 [MB] (26 MBps) Copying: 828/1024 [MB] (28 MBps) Copying: 856/1024 [MB] (27 MBps) Copying: 884/1024 [MB] (27 MBps) Copying: 913/1024 [MB] (28 MBps) Copying: 938/1024 [MB] (25 MBps) Copying: 966/1024 [MB] (28 MBps) Copying: 994/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (28 MBps) Copying: 1048444/1048576 [kB] (856 kBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-06-10 10:12:43.479106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.065 [2024-06-10 10:12:43.479333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:54.065 [2024-06-10 10:12:43.479514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:54.065 [2024-06-10 10:12:43.479692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.065 [2024-06-10 10:12:43.480991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:54.065 [2024-06-10 10:12:43.485792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.065 [2024-06-10 10:12:43.485974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:54.065 [2024-06-10 10:12:43.486112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:23:54.065 [2024-06-10 10:12:43.486243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.065 [2024-06-10 10:12:43.500049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.065 [2024-06-10 10:12:43.500348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:54.065 [2024-06-10 10:12:43.500535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.565 ms 00:23:54.065 [2024-06-10 10:12:43.500601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.065 [2024-06-10 10:12:43.525285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.065 [2024-06-10 10:12:43.525526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:54.065 [2024-06-10 10:12:43.525571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.564 ms 00:23:54.065 [2024-06-10 10:12:43.525585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.065 [2024-06-10 10:12:43.532338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.065 [2024-06-10 10:12:43.532391] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:54.065 [2024-06-10 10:12:43.532406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.702 ms 00:23:54.065 [2024-06-10 10:12:43.532417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.065 [2024-06-10 10:12:43.565031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.065 [2024-06-10 10:12:43.565086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:54.065 [2024-06-10 10:12:43.565105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.559 ms 00:23:54.065 [2024-06-10 10:12:43.565117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.325 [2024-06-10 10:12:43.584079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.325 [2024-06-10 10:12:43.584152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:54.325 [2024-06-10 10:12:43.584173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.911 ms 00:23:54.325 [2024-06-10 10:12:43.584197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.325 [2024-06-10 10:12:43.665698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.325 [2024-06-10 10:12:43.665814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:54.325 [2024-06-10 10:12:43.665837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.424 ms 00:23:54.325 [2024-06-10 10:12:43.665850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.325 [2024-06-10 10:12:43.697927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.325 [2024-06-10 10:12:43.698010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:54.325 [2024-06-10 10:12:43.698030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.050 ms 00:23:54.325 [2024-06-10 10:12:43.698042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.325 [2024-06-10 10:12:43.731788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.325 [2024-06-10 10:12:43.731862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:54.325 [2024-06-10 10:12:43.731884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.686 ms 00:23:54.325 [2024-06-10 10:12:43.731895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.325 [2024-06-10 10:12:43.763226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.325 [2024-06-10 10:12:43.763283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:54.325 [2024-06-10 10:12:43.763303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.259 ms 00:23:54.325 [2024-06-10 10:12:43.763314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.325 [2024-06-10 10:12:43.795235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.325 [2024-06-10 10:12:43.795316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:54.325 [2024-06-10 10:12:43.795337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.797 ms 00:23:54.325 [2024-06-10 10:12:43.795348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.325 [2024-06-10 10:12:43.795422] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:54.325 [2024-06-10 10:12:43.795463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120576 / 261120 wr_cnt: 1 state: open 00:23:54.325 [2024-06-10 10:12:43.795478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795787] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:54.325 [2024-06-10 10:12:43.795919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.795930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.795941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.795952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.795964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.795975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.795986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.795998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 
10:12:43.796077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:23:54.326 [2024-06-10 10:12:43.796378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:54.326 [2024-06-10 10:12:43.796700] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:54.326 [2024-06-10 10:12:43.796711] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef5368c-bce2-41c6-87e9-246a186c5c8a 00:23:54.326 [2024-06-10 10:12:43.796722] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120576 00:23:54.326 [2024-06-10 10:12:43.796733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121536 00:23:54.326 [2024-06-10 10:12:43.796744] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120576 00:23:54.326 [2024-06-10 10:12:43.796756] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:23:54.326 [2024-06-10 10:12:43.796766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:54.326 [2024-06-10 10:12:43.796777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:54.326 [2024-06-10 10:12:43.796788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:54.326 [2024-06-10 10:12:43.796798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:54.326 [2024-06-10 10:12:43.796808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:54.326 [2024-06-10 10:12:43.796820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.326 [2024-06-10 10:12:43.796831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:54.326 [2024-06-10 10:12:43.796851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:23:54.326 [2024-06-10 10:12:43.796862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.326 [2024-06-10 10:12:43.813499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.326 [2024-06-10 10:12:43.813579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:54.326 [2024-06-10 10:12:43.813599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.572 ms 00:23:54.326 [2024-06-10 10:12:43.813612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.326 [2024-06-10 10:12:43.814155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.326 [2024-06-10 10:12:43.814186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:54.326 [2024-06-10 10:12:43.814201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:23:54.326 [2024-06-10 10:12:43.814213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.584 [2024-06-10 10:12:43.851567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.584 [2024-06-10 10:12:43.851689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:54.584 [2024-06-10 10:12:43.851711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.584 [2024-06-10 10:12:43.851724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.584 [2024-06-10 10:12:43.851843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.584 [2024-06-10 10:12:43.851858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:54.584 [2024-06-10 10:12:43.851870] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.584 [2024-06-10 10:12:43.851882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.584 [2024-06-10 10:12:43.852021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.584 [2024-06-10 10:12:43.852040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:54.584 [2024-06-10 10:12:43.852052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.584 [2024-06-10 10:12:43.852063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.584 [2024-06-10 10:12:43.852088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.584 [2024-06-10 10:12:43.852108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:54.584 [2024-06-10 10:12:43.852120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.584 [2024-06-10 10:12:43.852130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.584 [2024-06-10 10:12:43.950514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.584 [2024-06-10 10:12:43.950585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:54.584 [2024-06-10 10:12:43.950604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.584 [2024-06-10 10:12:43.950616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.584 [2024-06-10 10:12:44.035114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.585 [2024-06-10 10:12:44.035205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:54.585 [2024-06-10 10:12:44.035225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.585 [2024-06-10 10:12:44.035237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.585 [2024-06-10 10:12:44.035317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.585 [2024-06-10 10:12:44.035334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.585 [2024-06-10 10:12:44.035346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.585 [2024-06-10 10:12:44.035358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.585 [2024-06-10 10:12:44.035403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.585 [2024-06-10 10:12:44.035416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.585 [2024-06-10 10:12:44.035433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.585 [2024-06-10 10:12:44.035445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.585 [2024-06-10 10:12:44.035564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.585 [2024-06-10 10:12:44.035583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.585 [2024-06-10 10:12:44.035596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.585 [2024-06-10 10:12:44.035607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.585 [2024-06-10 10:12:44.035692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.585 [2024-06-10 10:12:44.035711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:23:54.585 [2024-06-10 10:12:44.035724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.585 [2024-06-10 10:12:44.035741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.585 [2024-06-10 10:12:44.035786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.585 [2024-06-10 10:12:44.035801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.585 [2024-06-10 10:12:44.035813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.585 [2024-06-10 10:12:44.035824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.585 [2024-06-10 10:12:44.035873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.585 [2024-06-10 10:12:44.035889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.585 [2024-06-10 10:12:44.035905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.585 [2024-06-10 10:12:44.035916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.585 [2024-06-10 10:12:44.036075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.980 ms, result 0 00:23:56.486 00:23:56.486 00:23:56.486 10:12:45 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:56.486 [2024-06-10 10:12:45.668864] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:23:56.486 [2024-06-10 10:12:45.669039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82798 ] 00:23:56.486 [2024-06-10 10:12:45.834872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.744 [2024-06-10 10:12:46.019370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.003 [2024-06-10 10:12:46.356769] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:57.003 [2024-06-10 10:12:46.356864] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:57.003 [2024-06-10 10:12:46.510935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.003 [2024-06-10 10:12:46.511009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:57.003 [2024-06-10 10:12:46.511031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:57.003 [2024-06-10 10:12:46.511044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.003 [2024-06-10 10:12:46.511136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.003 [2024-06-10 10:12:46.511158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:57.003 [2024-06-10 10:12:46.511188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:57.003 [2024-06-10 10:12:46.511202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.003 [2024-06-10 10:12:46.511242] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:57.003 [2024-06-10 
10:12:46.512192] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:57.003 [2024-06-10 10:12:46.512237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.003 [2024-06-10 10:12:46.512253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:57.003 [2024-06-10 10:12:46.512270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:23:57.003 [2024-06-10 10:12:46.512282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.003 [2024-06-10 10:12:46.513412] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:57.263 [2024-06-10 10:12:46.529711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.529754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:57.263 [2024-06-10 10:12:46.529771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.300 ms 00:23:57.263 [2024-06-10 10:12:46.529783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.529862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.529891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:57.263 [2024-06-10 10:12:46.529906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:57.263 [2024-06-10 10:12:46.529922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.534556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.534623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:57.263 [2024-06-10 10:12:46.534666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:23:57.263 [2024-06-10 10:12:46.534688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.534835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.534870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:57.263 [2024-06-10 10:12:46.534902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:57.263 [2024-06-10 10:12:46.534922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.535026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.535059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:57.263 [2024-06-10 10:12:46.535084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:57.263 [2024-06-10 10:12:46.535100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.535140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:57.263 [2024-06-10 10:12:46.540060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.540100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:57.263 [2024-06-10 10:12:46.540117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:23:57.263 [2024-06-10 10:12:46.540128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
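One sanity check on the "Dump statistics" block in the shutdown sequence above (total writes: 121536, user writes: 120576, WAF: 1.0080): the logged write-amplification factor is simply total media writes divided by user writes, the small difference presumably being the FTL's own metadata writes. A quick arithmetic check with a hypothetical helper, not SPDK code:

def write_amplification(total_writes: int, user_writes: int) -> float:
    """WAF = all blocks written to media / blocks written on behalf of the user."""
    return total_writes / user_writes

print(f"{write_amplification(121536, 120576):.4f}")  # 1.0080, as logged by ftl_dev_dump_stats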
00:23:57.263 [2024-06-10 10:12:46.540182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.540203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:57.263 [2024-06-10 10:12:46.540216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:57.263 [2024-06-10 10:12:46.540226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.540295] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:57.263 [2024-06-10 10:12:46.540326] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:57.263 [2024-06-10 10:12:46.540377] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:57.263 [2024-06-10 10:12:46.540398] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:57.263 [2024-06-10 10:12:46.540515] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:57.263 [2024-06-10 10:12:46.540530] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:57.263 [2024-06-10 10:12:46.540544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:57.263 [2024-06-10 10:12:46.540560] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:57.263 [2024-06-10 10:12:46.540573] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:57.263 [2024-06-10 10:12:46.540585] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:57.263 [2024-06-10 10:12:46.540596] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:57.263 [2024-06-10 10:12:46.540607] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:57.263 [2024-06-10 10:12:46.540617] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:57.263 [2024-06-10 10:12:46.540628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.540655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:57.263 [2024-06-10 10:12:46.540674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:23:57.263 [2024-06-10 10:12:46.540686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.540784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.263 [2024-06-10 10:12:46.540799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:57.263 [2024-06-10 10:12:46.540811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:57.263 [2024-06-10 10:12:46.540821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.263 [2024-06-10 10:12:46.540938] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:57.263 [2024-06-10 10:12:46.540968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:57.263 [2024-06-10 10:12:46.540986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.263 [2024-06-10 10:12:46.541013] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.263 [2024-06-10 10:12:46.541030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:57.263 [2024-06-10 10:12:46.541041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:57.263 [2024-06-10 10:12:46.541053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:57.263 [2024-06-10 10:12:46.541063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:57.263 [2024-06-10 10:12:46.541073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:57.263 [2024-06-10 10:12:46.541084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.263 [2024-06-10 10:12:46.541094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:57.264 [2024-06-10 10:12:46.541104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:57.264 [2024-06-10 10:12:46.541114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.264 [2024-06-10 10:12:46.541124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:57.264 [2024-06-10 10:12:46.541134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:57.264 [2024-06-10 10:12:46.541144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:57.264 [2024-06-10 10:12:46.541164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:57.264 [2024-06-10 10:12:46.541174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:57.264 [2024-06-10 10:12:46.541194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.264 [2024-06-10 10:12:46.541228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:57.264 [2024-06-10 10:12:46.541239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.264 [2024-06-10 10:12:46.541258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:57.264 [2024-06-10 10:12:46.541268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.264 [2024-06-10 10:12:46.541288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:57.264 [2024-06-10 10:12:46.541298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.264 [2024-06-10 10:12:46.541318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:57.264 [2024-06-10 10:12:46.541328] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.264 [2024-06-10 10:12:46.541349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:57.264 [2024-06-10 
10:12:46.541359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:57.264 [2024-06-10 10:12:46.541369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.264 [2024-06-10 10:12:46.541379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:57.264 [2024-06-10 10:12:46.541389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:57.264 [2024-06-10 10:12:46.541399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:57.264 [2024-06-10 10:12:46.541419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:57.264 [2024-06-10 10:12:46.541429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541438] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:57.264 [2024-06-10 10:12:46.541449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:57.264 [2024-06-10 10:12:46.541460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.264 [2024-06-10 10:12:46.541471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.264 [2024-06-10 10:12:46.541482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:57.264 [2024-06-10 10:12:46.541492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:57.264 [2024-06-10 10:12:46.541502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:57.264 [2024-06-10 10:12:46.541513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:57.264 [2024-06-10 10:12:46.541523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:57.264 [2024-06-10 10:12:46.541533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:57.264 [2024-06-10 10:12:46.541545] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:57.264 [2024-06-10 10:12:46.541559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.264 [2024-06-10 10:12:46.541572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:57.264 [2024-06-10 10:12:46.541583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:57.264 [2024-06-10 10:12:46.541594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:57.264 [2024-06-10 10:12:46.541605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:57.264 [2024-06-10 10:12:46.541616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:57.264 [2024-06-10 10:12:46.541627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:57.264 [2024-06-10 10:12:46.541654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:23:57.264 [2024-06-10 10:12:46.541668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:57.264 [2024-06-10 10:12:46.541679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:57.264 [2024-06-10 10:12:46.541690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:57.264 [2024-06-10 10:12:46.541701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:57.264 [2024-06-10 10:12:46.541712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:57.264 [2024-06-10 10:12:46.541724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:57.264 [2024-06-10 10:12:46.541736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:57.264 [2024-06-10 10:12:46.541747] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:57.264 [2024-06-10 10:12:46.541759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.264 [2024-06-10 10:12:46.541771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:57.264 [2024-06-10 10:12:46.541783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:57.264 [2024-06-10 10:12:46.541794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:57.264 [2024-06-10 10:12:46.541805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:57.264 [2024-06-10 10:12:46.541817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.264 [2024-06-10 10:12:46.541828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:57.264 [2024-06-10 10:12:46.541845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:23:57.264 [2024-06-10 10:12:46.541855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.264 [2024-06-10 10:12:46.582262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.264 [2024-06-10 10:12:46.582326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:57.264 [2024-06-10 10:12:46.582346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.327 ms 00:23:57.264 [2024-06-10 10:12:46.582358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.264 [2024-06-10 10:12:46.582481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.264 [2024-06-10 10:12:46.582498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:57.264 [2024-06-10 10:12:46.582516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:57.264 [2024-06-10 10:12:46.582527] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.264 [2024-06-10 10:12:46.621325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.264 [2024-06-10 10:12:46.621379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:57.264 [2024-06-10 10:12:46.621397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.707 ms 00:23:57.265 [2024-06-10 10:12:46.621409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.621479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.621496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:57.265 [2024-06-10 10:12:46.621509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:57.265 [2024-06-10 10:12:46.621519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.621924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.621955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:57.265 [2024-06-10 10:12:46.621969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:23:57.265 [2024-06-10 10:12:46.621980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.622136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.622155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:57.265 [2024-06-10 10:12:46.622168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:23:57.265 [2024-06-10 10:12:46.622179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.638775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.638843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:57.265 [2024-06-10 10:12:46.638874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.566 ms 00:23:57.265 [2024-06-10 10:12:46.638894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.660145] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:57.265 [2024-06-10 10:12:46.660215] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:57.265 [2024-06-10 10:12:46.660258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.660281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:57.265 [2024-06-10 10:12:46.660304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.134 ms 00:23:57.265 [2024-06-10 10:12:46.660327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.699708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.699825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:57.265 [2024-06-10 10:12:46.699859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.291 ms 00:23:57.265 [2024-06-10 10:12:46.699903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 
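
For reference: the superblock metadata layout tables dumped above list each region as hex block offsets and sizes (blk_offs/blk_sz). Assuming the 4 KiB FTL block size the dump itself implies (blk_sz:0x5000 = 20480 blocks matches the 80.00 MiB l2p region), the table can be decoded from a saved copy of this console output. A minimal sketch, where console.log is a hypothetical saved copy and strtonum is a gawk extension:

grep -o 'Region type:[^ ]* ver:[^ ]* blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' console.log |
  gawk '{ split($5, a, ":"); blocks = strtonum(a[2])            # blk_sz:0xNNN -> block count
          printf "%-18s %10.2f MiB\n", $2, blocks * 4096 / 1048576 }'   # 4 KiB blocks assumed
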
[2024-06-10 10:12:46.719397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.719476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:57.265 [2024-06-10 10:12:46.719496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.221 ms 00:23:57.265 [2024-06-10 10:12:46.719508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.737772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.737848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:57.265 [2024-06-10 10:12:46.737879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.173 ms 00:23:57.265 [2024-06-10 10:12:46.737899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.265 [2024-06-10 10:12:46.738831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.265 [2024-06-10 10:12:46.738871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:57.265 [2024-06-10 10:12:46.738895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:23:57.265 [2024-06-10 10:12:46.738908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.524 [2024-06-10 10:12:46.816135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.524 [2024-06-10 10:12:46.816202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:57.524 [2024-06-10 10:12:46.816222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.194 ms 00:23:57.524 [2024-06-10 10:12:46.816234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.524 [2024-06-10 10:12:46.829246] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:57.524 [2024-06-10 10:12:46.831985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.524 [2024-06-10 10:12:46.832031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:57.524 [2024-06-10 10:12:46.832049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.668 ms 00:23:57.524 [2024-06-10 10:12:46.832061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.524 [2024-06-10 10:12:46.832192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.524 [2024-06-10 10:12:46.832212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:57.524 [2024-06-10 10:12:46.832226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:57.524 [2024-06-10 10:12:46.832238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.524 [2024-06-10 10:12:46.833812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.524 [2024-06-10 10:12:46.833849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:57.524 [2024-06-10 10:12:46.833870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:23:57.524 [2024-06-10 10:12:46.833881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.524 [2024-06-10 10:12:46.833919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.524 [2024-06-10 10:12:46.833934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:57.524 [2024-06-10 
10:12:46.833946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:57.524 [2024-06-10 10:12:46.833957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.524 [2024-06-10 10:12:46.833997] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:57.524 [2024-06-10 10:12:46.834013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.524 [2024-06-10 10:12:46.834023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:57.524 [2024-06-10 10:12:46.834035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:57.524 [2024-06-10 10:12:46.834050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.524 [2024-06-10 10:12:46.865371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.524 [2024-06-10 10:12:46.865435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:57.524 [2024-06-10 10:12:46.865454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.294 ms 00:23:57.525 [2024-06-10 10:12:46.865466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.525 [2024-06-10 10:12:46.865580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.525 [2024-06-10 10:12:46.865599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:57.525 [2024-06-10 10:12:46.865620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:57.525 [2024-06-10 10:12:46.865632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.525 [2024-06-10 10:12:46.873832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.026 ms, result 0 00:24:37.871  Copying: 25/1024 [MB] (25 MBps) Copying: 52/1024 [MB] (27 MBps) Copying: 80/1024 [MB] (27 MBps) Copying: 104/1024 [MB] (24 MBps) Copying: 130/1024 [MB] (25 MBps) Copying: 153/1024 [MB] (23 MBps) Copying: 178/1024 [MB] (24 MBps) Copying: 204/1024 [MB] (25 MBps) Copying: 229/1024 [MB] (25 MBps) Copying: 257/1024 [MB] (27 MBps) Copying: 284/1024 [MB] (26 MBps) Copying: 311/1024 [MB] (26 MBps) Copying: 334/1024 [MB] (23 MBps) Copying: 359/1024 [MB] (24 MBps) Copying: 383/1024 [MB] (24 MBps) Copying: 408/1024 [MB] (24 MBps) Copying: 433/1024 [MB] (25 MBps) Copying: 458/1024 [MB] (24 MBps) Copying: 482/1024 [MB] (24 MBps) Copying: 510/1024 [MB] (28 MBps) Copying: 535/1024 [MB] (24 MBps) Copying: 557/1024 [MB] (22 MBps) Copying: 582/1024 [MB] (25 MBps) Copying: 610/1024 [MB] (27 MBps) Copying: 636/1024 [MB] (25 MBps) Copying: 662/1024 [MB] (26 MBps) Copying: 688/1024 [MB] (26 MBps) Copying: 715/1024 [MB] (27 MBps) Copying: 740/1024 [MB] (25 MBps) Copying: 767/1024 [MB] (26 MBps) Copying: 794/1024 [MB] (27 MBps) Copying: 822/1024 [MB] (28 MBps) Copying: 846/1024 [MB] (23 MBps) Copying: 871/1024 [MB] (25 MBps) Copying: 897/1024 [MB] (26 MBps) Copying: 921/1024 [MB] (23 MBps) Copying: 948/1024 [MB] (27 MBps) Copying: 975/1024 [MB] (26 MBps) Copying: 1001/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-06-10 10:13:27.386906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.871 [2024-06-10 10:13:27.387007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:37.871 [2024-06-10 10:13:27.387035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:37.871 
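
The trace_step records above come in name/duration pairs, and the finish_msg record sums them ('FTL startup' took 361.026 ms here). Assuming a saved raw console log with one record per line, as the CI originally prints it, a small awk sketch to rank the slowest management steps:

awk '/trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
     /trace_step.*duration:/ { print $(NF-1) " ms  " name }' console.log |
  sort -rn | head
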
[2024-06-10 10:13:27.387052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.871 [2024-06-10 10:13:27.387094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:38.130 [2024-06-10 10:13:27.391990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.130 [2024-06-10 10:13:27.392044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:38.130 [2024-06-10 10:13:27.392063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:24:38.130 [2024-06-10 10:13:27.392077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.130 [2024-06-10 10:13:27.392368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.130 [2024-06-10 10:13:27.392399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:38.130 [2024-06-10 10:13:27.392416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:38.130 [2024-06-10 10:13:27.392429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.130 [2024-06-10 10:13:27.398411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.130 [2024-06-10 10:13:27.398470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:38.130 [2024-06-10 10:13:27.398500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.956 ms 00:24:38.130 [2024-06-10 10:13:27.398514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.130 [2024-06-10 10:13:27.406807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.130 [2024-06-10 10:13:27.406853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:38.130 [2024-06-10 10:13:27.406871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.207 ms 00:24:38.130 [2024-06-10 10:13:27.406885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.130 [2024-06-10 10:13:27.446378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.130 [2024-06-10 10:13:27.446454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:38.130 [2024-06-10 10:13:27.446476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.429 ms 00:24:38.130 [2024-06-10 10:13:27.446490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.130 [2024-06-10 10:13:27.468393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.130 [2024-06-10 10:13:27.468495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:38.130 [2024-06-10 10:13:27.468519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.830 ms 00:24:38.130 [2024-06-10 10:13:27.468552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.130 [2024-06-10 10:13:27.566044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.131 [2024-06-10 10:13:27.566123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:38.131 [2024-06-10 10:13:27.566148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.409 ms 00:24:38.131 [2024-06-10 10:13:27.566163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.131 [2024-06-10 10:13:27.606924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.131 [2024-06-10 10:13:27.607024] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:38.131 [2024-06-10 10:13:27.607049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.727 ms 00:24:38.131 [2024-06-10 10:13:27.607064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.131 [2024-06-10 10:13:27.646412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.131 [2024-06-10 10:13:27.646503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:38.131 [2024-06-10 10:13:27.646526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.249 ms 00:24:38.131 [2024-06-10 10:13:27.646539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.391 [2024-06-10 10:13:27.684152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.391 [2024-06-10 10:13:27.684224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:38.391 [2024-06-10 10:13:27.684245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.530 ms 00:24:38.391 [2024-06-10 10:13:27.684263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.391 [2024-06-10 10:13:27.715760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.391 [2024-06-10 10:13:27.715831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:38.391 [2024-06-10 10:13:27.715853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.357 ms 00:24:38.391 [2024-06-10 10:13:27.715864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.391 [2024-06-10 10:13:27.715929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:38.391 [2024-06-10 10:13:27.715964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:24:38.391 [2024-06-10 10:13:27.715979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.715992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716108] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 
10:13:27.716411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:38.391 [2024-06-10 10:13:27.716565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:24:38.392 [2024-06-10 10:13:27.716721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.716987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:38.392 [2024-06-10 10:13:27.717225] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:38.392 [2024-06-10 10:13:27.717238] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ef5368c-bce2-41c6-87e9-246a186c5c8a 00:24:38.392 [2024-06-10 10:13:27.717251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:24:38.392 [2024-06-10 10:13:27.717264] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14272 00:24:38.392 [2024-06-10 10:13:27.717276] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13312 00:24:38.392 [2024-06-10 10:13:27.717289] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0721 00:24:38.392 [2024-06-10 10:13:27.717301] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:38.392 [2024-06-10 10:13:27.717313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:38.392 [2024-06-10 10:13:27.717324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:38.392 [2024-06-10 10:13:27.717335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:38.392 [2024-06-10 10:13:27.717346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:38.392 [2024-06-10 10:13:27.717357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.392 [2024-06-10 10:13:27.717370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:38.392 [2024-06-10 10:13:27.717402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:24:38.392 [2024-06-10 10:13:27.717413] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.392 [2024-06-10 10:13:27.734174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.392 [2024-06-10 10:13:27.734238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:38.392 [2024-06-10 10:13:27.734258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.709 ms 00:24:38.392 [2024-06-10 10:13:27.734269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.392 [2024-06-10 10:13:27.734757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.392 [2024-06-10 10:13:27.734801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:38.392 [2024-06-10 10:13:27.734816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:24:38.392 [2024-06-10 10:13:27.734828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.392 [2024-06-10 10:13:27.774328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.393 [2024-06-10 10:13:27.774390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:38.393 [2024-06-10 10:13:27.774410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.393 [2024-06-10 10:13:27.774422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.393 [2024-06-10 10:13:27.774513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.393 [2024-06-10 10:13:27.774529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:38.393 [2024-06-10 10:13:27.774542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.393 [2024-06-10 10:13:27.774552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.393 [2024-06-10 10:13:27.774665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.393 [2024-06-10 10:13:27.774685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:38.393 [2024-06-10 10:13:27.774698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.393 [2024-06-10 10:13:27.774709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.393 [2024-06-10 10:13:27.774731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.393 [2024-06-10 10:13:27.774751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:38.393 [2024-06-10 10:13:27.774763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.393 [2024-06-10 10:13:27.774773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.393 [2024-06-10 10:13:27.881233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.393 [2024-06-10 10:13:27.881306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:38.393 [2024-06-10 10:13:27.881325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.393 [2024-06-10 10:13:27.881336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.979370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.651 [2024-06-10 10:13:27.979457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:38.651 [2024-06-10 10:13:27.979487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:38.651 [2024-06-10 10:13:27.979500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.979580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.651 [2024-06-10 10:13:27.979596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:38.651 [2024-06-10 10:13:27.979608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.651 [2024-06-10 10:13:27.979620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.979703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.651 [2024-06-10 10:13:27.979722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:38.651 [2024-06-10 10:13:27.979740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.651 [2024-06-10 10:13:27.979751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.979874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.651 [2024-06-10 10:13:27.979893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:38.651 [2024-06-10 10:13:27.979906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.651 [2024-06-10 10:13:27.979917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.979963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.651 [2024-06-10 10:13:27.979980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:38.651 [2024-06-10 10:13:27.979992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.651 [2024-06-10 10:13:27.980009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.980055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.651 [2024-06-10 10:13:27.980069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:38.651 [2024-06-10 10:13:27.980081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.651 [2024-06-10 10:13:27.980091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.980171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.651 [2024-06-10 10:13:27.980199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:38.651 [2024-06-10 10:13:27.980219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.651 [2024-06-10 10:13:27.980231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.651 [2024-06-10 10:13:27.980372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.451 ms, result 0 00:24:40.027 00:24:40.027 00:24:40.027 10:13:29 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:42.557 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:42.557 10:13:31 
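
The md5sum -c check above is the heart of the restore test: the test file's digest is recorded before the dirty shutdown and verified once the FTL device comes back, so "testfile: OK" means no data was lost across the unclean shutdown. The pattern, as a hedged sketch (the paths stand in for the test's own):

md5sum testfile > testfile.md5    # before: record the digest
# ... dirty shutdown and FTL restore happen in between ...
md5sum -c testfile.md5            # after: prints "testfile: OK" on a match
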
ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 81295 00:24:42.557 10:13:31 ftl.ftl_restore -- common/autotest_common.sh@949 -- # '[' -z 81295 ']' 00:24:42.557 10:13:31 ftl.ftl_restore -- common/autotest_common.sh@953 -- # kill -0 81295 00:24:42.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (81295) - No such process 00:24:42.557 Process with pid 81295 is not found 00:24:42.557 10:13:31 ftl.ftl_restore -- common/autotest_common.sh@976 -- # echo 'Process with pid 81295 is not found' 00:24:42.557 Remove shared memory files 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:42.557 10:13:31 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:42.557 00:24:42.557 real 3m12.148s 00:24:42.557 user 2m57.959s 00:24:42.557 sys 0m16.391s 00:24:42.557 10:13:31 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:42.557 10:13:31 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:42.557 ************************************ 00:24:42.557 END TEST ftl_restore 00:24:42.557 ************************************ 00:24:42.557 10:13:31 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:42.557 10:13:31 ftl -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:24:42.557 10:13:31 ftl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:42.557 10:13:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:42.557 ************************************ 00:24:42.557 START TEST ftl_dirty_shutdown 00:24:42.557 ************************************ 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:42.557 * Looking for test storage... 00:24:42.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
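
The killprocess trace above shows the teardown pattern: probe with kill -0 first (a liveness check that delivers no signal), and report "not found" when the target is already gone, which is the expected case here since pid 81295 exited with the 'FTL shutdown' that completed earlier. A rough sketch of that logic, not the exact autotest_common.sh implementation:

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1            # no pid recorded, nothing to do
  if kill -0 "$pid" 2>/dev/null; then  # signal 0: liveness probe only
    kill "$pid" && wait "$pid"         # terminate and reap (pid is a child here)
  else
    echo "Process with pid $pid is not found"
  fi
}
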
00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:42.557 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83312 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83312 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@830 -- # '[' -z 83312 ']' 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:42.558 10:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:42.558 [2024-06-10 10:13:32.010599] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:24:42.558 [2024-06-10 10:13:32.010785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83312 ] 00:24:42.815 [2024-06-10 10:13:32.183605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.072 [2024-06-10 10:13:32.412062] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@863 -- # return 0 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:44.006 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_name=nvme0n1 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local 
bdev_info 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bs 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local nb 00:24:44.264 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:24:44.524 { 00:24:44.524 "name": "nvme0n1", 00:24:44.524 "aliases": [ 00:24:44.524 "0e9ca6f7-db9a-4b4d-ada6-e2da9ef87523" 00:24:44.524 ], 00:24:44.524 "product_name": "NVMe disk", 00:24:44.524 "block_size": 4096, 00:24:44.524 "num_blocks": 1310720, 00:24:44.524 "uuid": "0e9ca6f7-db9a-4b4d-ada6-e2da9ef87523", 00:24:44.524 "assigned_rate_limits": { 00:24:44.524 "rw_ios_per_sec": 0, 00:24:44.524 "rw_mbytes_per_sec": 0, 00:24:44.524 "r_mbytes_per_sec": 0, 00:24:44.524 "w_mbytes_per_sec": 0 00:24:44.524 }, 00:24:44.524 "claimed": true, 00:24:44.524 "claim_type": "read_many_write_one", 00:24:44.524 "zoned": false, 00:24:44.524 "supported_io_types": { 00:24:44.524 "read": true, 00:24:44.524 "write": true, 00:24:44.524 "unmap": true, 00:24:44.524 "write_zeroes": true, 00:24:44.524 "flush": true, 00:24:44.524 "reset": true, 00:24:44.524 "compare": true, 00:24:44.524 "compare_and_write": false, 00:24:44.524 "abort": true, 00:24:44.524 "nvme_admin": true, 00:24:44.524 "nvme_io": true 00:24:44.524 }, 00:24:44.524 "driver_specific": { 00:24:44.524 "nvme": [ 00:24:44.524 { 00:24:44.524 "pci_address": "0000:00:11.0", 00:24:44.524 "trid": { 00:24:44.524 "trtype": "PCIe", 00:24:44.524 "traddr": "0000:00:11.0" 00:24:44.524 }, 00:24:44.524 "ctrlr_data": { 00:24:44.524 "cntlid": 0, 00:24:44.524 "vendor_id": "0x1b36", 00:24:44.524 "model_number": "QEMU NVMe Ctrl", 00:24:44.524 "serial_number": "12341", 00:24:44.524 "firmware_revision": "8.0.0", 00:24:44.524 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:44.524 "oacs": { 00:24:44.524 "security": 0, 00:24:44.524 "format": 1, 00:24:44.524 "firmware": 0, 00:24:44.524 "ns_manage": 1 00:24:44.524 }, 00:24:44.524 "multi_ctrlr": false, 00:24:44.524 "ana_reporting": false 00:24:44.524 }, 00:24:44.524 "vs": { 00:24:44.524 "nvme_version": "1.4" 00:24:44.524 }, 00:24:44.524 "ns_data": { 00:24:44.524 "id": 1, 00:24:44.524 "can_share": false 00:24:44.524 } 00:24:44.524 } 00:24:44.524 ], 00:24:44.524 "mp_policy": "active_passive" 00:24:44.524 } 00:24:44.524 } 00:24:44.524 ]' 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bs=4096 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # nb=1310720 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_size=5120 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # echo 5120 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:44.524 10:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
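
The get_bdev_size trace above reduces to one formula: size in MiB = block_size * num_blocks / 1048576, here 4096 * 1310720 / 1048576 = 5120. A sketch of the same steps, where rpc_py stands for the scripts/rpc.py path used throughout this run:

get_bdev_size() {
  local name=$1 info bs nb
  info=$($rpc_py bdev_get_bdevs -b "$name")
  bs=$(jq '.[] .block_size' <<< "$info")    # 4096 for nvme0n1
  nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720 for nvme0n1
  echo $(( bs * nb / 1024 / 1024 ))         # -> 5120 (MiB)
}
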
00:24:44.783 10:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=99e489d3-349b-4fbc-9b0a-e3a2f8403a3c 00:24:44.783 10:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:44.783 10:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99e489d3-349b-4fbc-9b0a-e3a2f8403a3c 00:24:45.041 10:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:45.300 10:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8d58bfa2-f4be-4730-b45d-b6010bef53e9 00:24:45.300 10:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8d58bfa2-f4be-4730-b45d-b6010bef53e9 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=00288acd-6bb8-4340-9d06-4da5751724b6 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 00288acd-6bb8-4340-9d06-4da5751724b6 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=00288acd-6bb8-4340-9d06-4da5751724b6 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 00288acd-6bb8-4340-9d06-4da5751724b6 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_name=00288acd-6bb8-4340-9d06-4da5751724b6 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_info 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bs 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local nb 00:24:45.558 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00288acd-6bb8-4340-9d06-4da5751724b6 00:24:46.137 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:24:46.137 { 00:24:46.137 "name": "00288acd-6bb8-4340-9d06-4da5751724b6", 00:24:46.137 "aliases": [ 00:24:46.137 "lvs/nvme0n1p0" 00:24:46.137 ], 00:24:46.137 "product_name": "Logical Volume", 00:24:46.137 "block_size": 4096, 00:24:46.137 "num_blocks": 26476544, 00:24:46.137 "uuid": "00288acd-6bb8-4340-9d06-4da5751724b6", 00:24:46.137 "assigned_rate_limits": { 00:24:46.137 "rw_ios_per_sec": 0, 00:24:46.137 "rw_mbytes_per_sec": 0, 00:24:46.137 "r_mbytes_per_sec": 0, 00:24:46.137 "w_mbytes_per_sec": 0 00:24:46.137 }, 00:24:46.137 "claimed": false, 00:24:46.137 "zoned": false, 00:24:46.137 "supported_io_types": { 00:24:46.137 "read": true, 00:24:46.137 "write": true, 00:24:46.137 "unmap": true, 00:24:46.137 "write_zeroes": true, 00:24:46.137 "flush": false, 00:24:46.137 "reset": true, 00:24:46.137 "compare": false, 00:24:46.137 "compare_and_write": false, 00:24:46.137 "abort": false, 00:24:46.137 "nvme_admin": false, 00:24:46.137 "nvme_io": false 00:24:46.137 }, 00:24:46.137 "driver_specific": { 00:24:46.137 "lvol": { 00:24:46.137 "lvol_store_uuid": "8d58bfa2-f4be-4730-b45d-b6010bef53e9", 00:24:46.137 
"base_bdev": "nvme0n1", 00:24:46.137 "thin_provision": true, 00:24:46.137 "num_allocated_clusters": 0, 00:24:46.137 "snapshot": false, 00:24:46.138 "clone": false, 00:24:46.138 "esnap_clone": false 00:24:46.138 } 00:24:46.138 } 00:24:46.138 } 00:24:46.138 ]' 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bs=4096 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # nb=26476544 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # echo 103424 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:46.138 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 00288acd-6bb8-4340-9d06-4da5751724b6 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_name=00288acd-6bb8-4340-9d06-4da5751724b6 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_info 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bs 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local nb 00:24:46.398 10:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00288acd-6bb8-4340-9d06-4da5751724b6 00:24:46.657 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:24:46.657 { 00:24:46.657 "name": "00288acd-6bb8-4340-9d06-4da5751724b6", 00:24:46.657 "aliases": [ 00:24:46.657 "lvs/nvme0n1p0" 00:24:46.657 ], 00:24:46.657 "product_name": "Logical Volume", 00:24:46.657 "block_size": 4096, 00:24:46.657 "num_blocks": 26476544, 00:24:46.657 "uuid": "00288acd-6bb8-4340-9d06-4da5751724b6", 00:24:46.657 "assigned_rate_limits": { 00:24:46.657 "rw_ios_per_sec": 0, 00:24:46.657 "rw_mbytes_per_sec": 0, 00:24:46.657 "r_mbytes_per_sec": 0, 00:24:46.657 "w_mbytes_per_sec": 0 00:24:46.657 }, 00:24:46.657 "claimed": false, 00:24:46.657 "zoned": false, 00:24:46.657 "supported_io_types": { 00:24:46.657 "read": true, 00:24:46.657 "write": true, 00:24:46.657 "unmap": true, 00:24:46.657 "write_zeroes": true, 00:24:46.657 "flush": false, 00:24:46.657 "reset": true, 00:24:46.657 "compare": false, 00:24:46.657 "compare_and_write": false, 00:24:46.657 "abort": false, 00:24:46.657 "nvme_admin": false, 00:24:46.657 "nvme_io": false 00:24:46.657 }, 00:24:46.657 "driver_specific": { 00:24:46.657 "lvol": { 00:24:46.657 "lvol_store_uuid": "8d58bfa2-f4be-4730-b45d-b6010bef53e9", 00:24:46.657 "base_bdev": "nvme0n1", 00:24:46.657 "thin_provision": true, 00:24:46.657 "num_allocated_clusters": 0, 00:24:46.657 "snapshot": false, 00:24:46.657 "clone": false, 00:24:46.657 "esnap_clone": false 00:24:46.657 } 00:24:46.657 } 00:24:46.657 
} 00:24:46.657 ]' 00:24:46.657 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:24:46.917 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bs=4096 00:24:46.917 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:24:46.917 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # nb=26476544 00:24:46.917 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:24:46.917 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # echo 103424 00:24:46.917 10:13:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:46.917 10:13:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:47.175 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:47.175 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 00288acd-6bb8-4340-9d06-4da5751724b6 00:24:47.175 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local bdev_name=00288acd-6bb8-4340-9d06-4da5751724b6 00:24:47.175 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_info 00:24:47.175 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bs 00:24:47.175 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local nb 00:24:47.176 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00288acd-6bb8-4340-9d06-4da5751724b6 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:24:47.434 { 00:24:47.434 "name": "00288acd-6bb8-4340-9d06-4da5751724b6", 00:24:47.434 "aliases": [ 00:24:47.434 "lvs/nvme0n1p0" 00:24:47.434 ], 00:24:47.434 "product_name": "Logical Volume", 00:24:47.434 "block_size": 4096, 00:24:47.434 "num_blocks": 26476544, 00:24:47.434 "uuid": "00288acd-6bb8-4340-9d06-4da5751724b6", 00:24:47.434 "assigned_rate_limits": { 00:24:47.434 "rw_ios_per_sec": 0, 00:24:47.434 "rw_mbytes_per_sec": 0, 00:24:47.434 "r_mbytes_per_sec": 0, 00:24:47.434 "w_mbytes_per_sec": 0 00:24:47.434 }, 00:24:47.434 "claimed": false, 00:24:47.434 "zoned": false, 00:24:47.434 "supported_io_types": { 00:24:47.434 "read": true, 00:24:47.434 "write": true, 00:24:47.434 "unmap": true, 00:24:47.434 "write_zeroes": true, 00:24:47.434 "flush": false, 00:24:47.434 "reset": true, 00:24:47.434 "compare": false, 00:24:47.434 "compare_and_write": false, 00:24:47.434 "abort": false, 00:24:47.434 "nvme_admin": false, 00:24:47.434 "nvme_io": false 00:24:47.434 }, 00:24:47.434 "driver_specific": { 00:24:47.434 "lvol": { 00:24:47.434 "lvol_store_uuid": "8d58bfa2-f4be-4730-b45d-b6010bef53e9", 00:24:47.434 "base_bdev": "nvme0n1", 00:24:47.434 "thin_provision": true, 00:24:47.434 "num_allocated_clusters": 0, 00:24:47.434 "snapshot": false, 00:24:47.434 "clone": false, 00:24:47.434 "esnap_clone": false 00:24:47.434 } 00:24:47.434 } 00:24:47.434 } 00:24:47.434 ]' 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bs=4096 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # nb=26476544 
00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_size=103424 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # echo 103424 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 00288acd-6bb8-4340-9d06-4da5751724b6 --l2p_dram_limit 10' 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:47.434 10:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 00288acd-6bb8-4340-9d06-4da5751724b6 --l2p_dram_limit 10 -c nvc0n1p0 00:24:47.693 [2024-06-10 10:13:37.201539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.693 [2024-06-10 10:13:37.201609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:47.693 [2024-06-10 10:13:37.201635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:47.693 [2024-06-10 10:13:37.201665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.693 [2024-06-10 10:13:37.201755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.693 [2024-06-10 10:13:37.201775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.693 [2024-06-10 10:13:37.201792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:47.693 [2024-06-10 10:13:37.201805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.693 [2024-06-10 10:13:37.201839] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:47.693 [2024-06-10 10:13:37.202819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:47.693 [2024-06-10 10:13:37.202867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.693 [2024-06-10 10:13:37.202883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.693 [2024-06-10 10:13:37.202902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:24:47.693 [2024-06-10 10:13:37.202914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.693 [2024-06-10 10:13:37.203061] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 506b159a-1297-4728-99de-db62ad24bd2a 00:24:47.693 [2024-06-10 10:13:37.204149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.694 [2024-06-10 10:13:37.204195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:47.694 [2024-06-10 10:13:37.204213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:47.694 [2024-06-10 10:13:37.204228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.694 [2024-06-10 10:13:37.208989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.694 [2024-06-10 10:13:37.209051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.694 [2024-06-10 10:13:37.209074] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:24:47.694 [2024-06-10 10:13:37.209088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.694 [2024-06-10 10:13:37.209225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.694 [2024-06-10 10:13:37.209249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.694 [2024-06-10 10:13:37.209263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:47.694 [2024-06-10 10:13:37.209278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.694 [2024-06-10 10:13:37.209367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.694 [2024-06-10 10:13:37.209390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:47.694 [2024-06-10 10:13:37.209404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:47.694 [2024-06-10 10:13:37.209421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.694 [2024-06-10 10:13:37.209455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.953 [2024-06-10 10:13:37.214043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.953 [2024-06-10 10:13:37.214088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.953 [2024-06-10 10:13:37.214108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:24:47.953 [2024-06-10 10:13:37.214121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.953 [2024-06-10 10:13:37.214171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.953 [2024-06-10 10:13:37.214187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:47.953 [2024-06-10 10:13:37.214203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:47.953 [2024-06-10 10:13:37.214214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.953 [2024-06-10 10:13:37.214272] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:47.953 [2024-06-10 10:13:37.214436] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:47.953 [2024-06-10 10:13:37.214457] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:47.953 [2024-06-10 10:13:37.214474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:47.953 [2024-06-10 10:13:37.214494] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:47.953 [2024-06-10 10:13:37.214509] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:47.953 [2024-06-10 10:13:37.214525] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:47.953 [2024-06-10 10:13:37.214537] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:47.953 [2024-06-10 10:13:37.214555] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:47.953 [2024-06-10 10:13:37.214567] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:47.953 [2024-06-10 10:13:37.214581] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.953 [2024-06-10 10:13:37.214592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:47.953 [2024-06-10 10:13:37.214606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:24:47.953 [2024-06-10 10:13:37.214618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.953 [2024-06-10 10:13:37.214733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.953 [2024-06-10 10:13:37.214750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:47.953 [2024-06-10 10:13:37.214765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:47.953 [2024-06-10 10:13:37.214777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.953 [2024-06-10 10:13:37.215041] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:47.953 [2024-06-10 10:13:37.215069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:47.953 [2024-06-10 10:13:37.215089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:47.953 [2024-06-10 10:13:37.215141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:47.953 [2024-06-10 10:13:37.215192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.953 [2024-06-10 10:13:37.215220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:47.953 [2024-06-10 10:13:37.215231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:47.953 [2024-06-10 10:13:37.215244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.953 [2024-06-10 10:13:37.215255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:47.953 [2024-06-10 10:13:37.215267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:47.953 [2024-06-10 10:13:37.215278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:47.953 [2024-06-10 10:13:37.215302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:47.953 [2024-06-10 10:13:37.215342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:47.953 [2024-06-10 10:13:37.215377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:47.953 [2024-06-10 
10:13:37.215390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:47.953 [2024-06-10 10:13:37.215413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:47.953 [2024-06-10 10:13:37.215447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:47.953 [2024-06-10 10:13:37.215484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.953 [2024-06-10 10:13:37.215510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:47.953 [2024-06-10 10:13:37.215521] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:47.953 [2024-06-10 10:13:37.215536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.953 [2024-06-10 10:13:37.215547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:47.953 [2024-06-10 10:13:37.215560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:47.953 [2024-06-10 10:13:37.215571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:47.953 [2024-06-10 10:13:37.215595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:47.953 [2024-06-10 10:13:37.215607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215618] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:47.953 [2024-06-10 10:13:37.215631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:47.953 [2024-06-10 10:13:37.215658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.953 [2024-06-10 10:13:37.215687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:47.953 [2024-06-10 10:13:37.215701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:47.953 [2024-06-10 10:13:37.215712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:47.953 [2024-06-10 10:13:37.215735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:47.953 [2024-06-10 10:13:37.215746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:47.953 [2024-06-10 10:13:37.215759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:47.953 [2024-06-10 10:13:37.215774] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:47.953 [2024-06-10 10:13:37.215791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 
blk_sz:0x20 00:24:47.953 [2024-06-10 10:13:37.215806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:47.953 [2024-06-10 10:13:37.215820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:47.954 [2024-06-10 10:13:37.215833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:47.954 [2024-06-10 10:13:37.215846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:47.954 [2024-06-10 10:13:37.215858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:47.954 [2024-06-10 10:13:37.215874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:47.954 [2024-06-10 10:13:37.215886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:47.954 [2024-06-10 10:13:37.215900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:47.954 [2024-06-10 10:13:37.215912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:47.954 [2024-06-10 10:13:37.215927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:47.954 [2024-06-10 10:13:37.215939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:47.954 [2024-06-10 10:13:37.215955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:47.954 [2024-06-10 10:13:37.215967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:47.954 [2024-06-10 10:13:37.215981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:47.954 [2024-06-10 10:13:37.215993] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:47.954 [2024-06-10 10:13:37.216008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.954 [2024-06-10 10:13:37.216021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:47.954 [2024-06-10 10:13:37.216035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:47.954 [2024-06-10 10:13:37.216047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:47.954 [2024-06-10 10:13:37.216061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:47.954 [2024-06-10 10:13:37.216074] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.954 [2024-06-10 10:13:37.216088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:47.954 [2024-06-10 10:13:37.216100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:24:47.954 [2024-06-10 10:13:37.216113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.954 [2024-06-10 10:13:37.216168] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:47.954 [2024-06-10 10:13:37.216187] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:49.856 [2024-06-10 10:13:39.232389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.232466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:49.856 [2024-06-10 10:13:39.232489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2016.232 ms 00:24:49.856 [2024-06-10 10:13:39.232506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.266091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.266163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.856 [2024-06-10 10:13:39.266186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.297 ms 00:24:49.856 [2024-06-10 10:13:39.266201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.266386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.266410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.856 [2024-06-10 10:13:39.266425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:49.856 [2024-06-10 10:13:39.266443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.306583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.306667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.856 [2024-06-10 10:13:39.306691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.084 ms 00:24:49.856 [2024-06-10 10:13:39.306707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.306773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.306800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.856 [2024-06-10 10:13:39.306815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.856 [2024-06-10 10:13:39.306828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.307255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.307294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.856 [2024-06-10 10:13:39.307311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:24:49.856 [2024-06-10 10:13:39.307326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.307475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.307502] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.856 [2024-06-10 10:13:39.307519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:49.856 [2024-06-10 10:13:39.307533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.325833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.325909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.856 [2024-06-10 10:13:39.325932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.269 ms 00:24:49.856 [2024-06-10 10:13:39.325947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-06-10 10:13:39.340523] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:49.856 [2024-06-10 10:13:39.343623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-06-10 10:13:39.343677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.856 [2024-06-10 10:13:39.343702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.517 ms 00:24:49.856 [2024-06-10 10:13:39.343715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.115 [2024-06-10 10:13:39.413670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.116 [2024-06-10 10:13:39.413746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:50.116 [2024-06-10 10:13:39.413771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.891 ms 00:24:50.116 [2024-06-10 10:13:39.413796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.116 [2024-06-10 10:13:39.414048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.116 [2024-06-10 10:13:39.414081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:50.116 [2024-06-10 10:13:39.414100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:24:50.116 [2024-06-10 10:13:39.414113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.116 [2024-06-10 10:13:39.445802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.116 [2024-06-10 10:13:39.445866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:50.116 [2024-06-10 10:13:39.445892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.601 ms 00:24:50.116 [2024-06-10 10:13:39.445906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.116 [2024-06-10 10:13:39.476850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.116 [2024-06-10 10:13:39.476911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:50.116 [2024-06-10 10:13:39.476936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.871 ms 00:24:50.116 [2024-06-10 10:13:39.476949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.116 [2024-06-10 10:13:39.477705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.116 [2024-06-10 10:13:39.477741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:50.116 [2024-06-10 10:13:39.477760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:24:50.116 [2024-06-10 
10:13:39.477777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.116 [2024-06-10 10:13:39.567492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.116 [2024-06-10 10:13:39.567585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:50.116 [2024-06-10 10:13:39.567612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.633 ms 00:24:50.116 [2024-06-10 10:13:39.567626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.116 [2024-06-10 10:13:39.601800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.116 [2024-06-10 10:13:39.601879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:50.116 [2024-06-10 10:13:39.601905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.064 ms 00:24:50.116 [2024-06-10 10:13:39.601919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.374 [2024-06-10 10:13:39.634917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.375 [2024-06-10 10:13:39.635004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:50.375 [2024-06-10 10:13:39.635032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.892 ms 00:24:50.375 [2024-06-10 10:13:39.635046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.375 [2024-06-10 10:13:39.670541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.375 [2024-06-10 10:13:39.670657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:50.375 [2024-06-10 10:13:39.670700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.398 ms 00:24:50.375 [2024-06-10 10:13:39.670725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.375 [2024-06-10 10:13:39.670874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.375 [2024-06-10 10:13:39.670908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:50.375 [2024-06-10 10:13:39.670944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:50.375 [2024-06-10 10:13:39.670967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.375 [2024-06-10 10:13:39.671165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.375 [2024-06-10 10:13:39.671223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:50.375 [2024-06-10 10:13:39.671260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:50.375 [2024-06-10 10:13:39.671280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.375 [2024-06-10 10:13:39.672915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2470.601 ms, result 0 00:24:50.375 { 00:24:50.375 "name": "ftl0", 00:24:50.375 "uuid": "506b159a-1297-4728-99de-db62ad24bd2a" 00:24:50.375 } 00:24:50.375 10:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:50.375 10:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:50.633 10:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:50.633 10:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:50.633 10:13:39 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:50.891 /dev/nbd0 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local i 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # break 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:50.891 1+0 records in 00:24:50.891 1+0 records out 00:24:50.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349106 s, 11.7 MB/s 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # size=4096 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:50.891 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:24:50.892 10:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # return 0 00:24:50.892 10:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:50.892 [2024-06-10 10:13:40.387294] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:24:50.892 [2024-06-10 10:13:40.387463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83450 ] 00:24:51.150 [2024-06-10 10:13:40.557965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.408 [2024-06-10 10:13:40.745825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.831  Copying: 165/1024 [MB] (165 MBps) Copying: 337/1024 [MB] (172 MBps) Copying: 509/1024 [MB] (172 MBps) Copying: 681/1024 [MB] (172 MBps) Copying: 847/1024 [MB] (166 MBps) Copying: 1007/1024 [MB] (160 MBps) Copying: 1024/1024 [MB] (average 168 MBps) 00:24:58.831 00:24:58.831 10:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:01.381 10:13:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:01.381 [2024-06-10 10:13:50.634978] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:25:01.381 [2024-06-10 10:13:50.635372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83555 ] 00:25:01.381 [2024-06-10 10:13:50.797004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.638 [2024-06-10 10:13:50.982022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.316  Copying: 16/1024 [MB] (16 MBps) Copying: 32/1024 [MB] (15 MBps) Copying: 48/1024 [MB] (16 MBps) Copying: 63/1024 [MB] (14 MBps) Copying: 78/1024 [MB] (15 MBps) Copying: 92/1024 [MB] (14 MBps) Copying: 107/1024 [MB] (14 MBps) Copying: 123/1024 [MB] (15 MBps) Copying: 139/1024 [MB] (16 MBps) Copying: 155/1024 [MB] (15 MBps) Copying: 168/1024 [MB] (12 MBps) Copying: 183/1024 [MB] (15 MBps) Copying: 198/1024 [MB] (15 MBps) Copying: 216/1024 [MB] (17 MBps) Copying: 230/1024 [MB] (14 MBps) Copying: 245/1024 [MB] (15 MBps) Copying: 259/1024 [MB] (14 MBps) Copying: 275/1024 [MB] (15 MBps) Copying: 290/1024 [MB] (14 MBps) Copying: 304/1024 [MB] (14 MBps) Copying: 317/1024 [MB] (12 MBps) Copying: 330/1024 [MB] (13 MBps) Copying: 347/1024 [MB] (16 MBps) Copying: 363/1024 [MB] (16 MBps) Copying: 382/1024 [MB] (18 MBps) Copying: 397/1024 [MB] (15 MBps) Copying: 411/1024 [MB] (14 MBps) Copying: 425/1024 [MB] (13 MBps) Copying: 438/1024 [MB] (12 MBps) Copying: 453/1024 [MB] (14 MBps) Copying: 465/1024 [MB] (12 MBps) Copying: 478/1024 [MB] (12 MBps) Copying: 492/1024 [MB] (14 MBps) Copying: 507/1024 [MB] (14 MBps) Copying: 520/1024 [MB] (13 MBps) Copying: 534/1024 [MB] (14 MBps) Copying: 548/1024 [MB] (14 MBps) Copying: 564/1024 [MB] (15 MBps) Copying: 578/1024 [MB] (14 MBps) Copying: 593/1024 [MB] (14 MBps) Copying: 609/1024 [MB] (16 MBps) Copying: 626/1024 [MB] (16 MBps) Copying: 643/1024 [MB] (16 MBps) Copying: 657/1024 [MB] (13 MBps) Copying: 671/1024 [MB] (14 MBps) Copying: 684/1024 [MB] (13 MBps) Copying: 700/1024 [MB] (16 MBps) Copying: 715/1024 [MB] (14 MBps) Copying: 732/1024 [MB] (16 MBps) Copying: 747/1024 [MB] (15 MBps) Copying: 765/1024 [MB] (17 MBps) Copying: 782/1024 [MB] (17 MBps) Copying: 799/1024 [MB] (17 MBps) Copying: 816/1024 [MB] (16 MBps) Copying: 833/1024 [MB] (16 MBps) Copying: 851/1024 [MB] (17 MBps) Copying: 868/1024 [MB] (16 MBps) Copying: 885/1024 [MB] (16 MBps) Copying: 900/1024 [MB] (15 MBps) Copying: 915/1024 [MB] (15 MBps) Copying: 932/1024 [MB] (16 MBps) Copying: 948/1024 [MB] (16 MBps) Copying: 962/1024 [MB] (14 MBps) Copying: 977/1024 [MB] (14 MBps) Copying: 992/1024 [MB] (14 MBps) Copying: 1006/1024 [MB] (14 MBps) Copying: 1022/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:26:10.316 00:26:10.316 10:14:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:10.316 10:14:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:10.576 10:14:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:10.576 [2024-06-10 10:15:00.088897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.576 [2024-06-10 10:15:00.088968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:10.576 [2024-06-10 10:15:00.088991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:10.576 [2024-06-10 10:15:00.089007] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.576 [2024-06-10 10:15:00.089062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:10.576 [2024-06-10 10:15:00.092460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.576 [2024-06-10 10:15:00.092499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:10.576 [2024-06-10 10:15:00.092519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:26:10.576 [2024-06-10 10:15:00.092533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.094116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.094164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:10.836 [2024-06-10 10:15:00.094189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:26:10.836 [2024-06-10 10:15:00.094203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.108514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.108570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:10.836 [2024-06-10 10:15:00.108594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.271 ms 00:26:10.836 [2024-06-10 10:15:00.108607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.115360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.115402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:10.836 [2024-06-10 10:15:00.115422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:26:10.836 [2024-06-10 10:15:00.115434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.147318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.147403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:10.836 [2024-06-10 10:15:00.147429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.742 ms 00:26:10.836 [2024-06-10 10:15:00.147442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.166399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.166462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:10.836 [2024-06-10 10:15:00.166506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.876 ms 00:26:10.836 [2024-06-10 10:15:00.166520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.166771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.166795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:10.836 [2024-06-10 10:15:00.166812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:26:10.836 [2024-06-10 10:15:00.166825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.198413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.198480] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:10.836 [2024-06-10 10:15:00.198504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.551 ms 00:26:10.836 [2024-06-10 10:15:00.198518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.229960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.230024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:10.836 [2024-06-10 10:15:00.230048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.362 ms 00:26:10.836 [2024-06-10 10:15:00.230061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.260994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.261080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:10.836 [2024-06-10 10:15:00.261110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.859 ms 00:26:10.836 [2024-06-10 10:15:00.261124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.292377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.836 [2024-06-10 10:15:00.292440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:10.836 [2024-06-10 10:15:00.292465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.093 ms 00:26:10.836 [2024-06-10 10:15:00.292478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.836 [2024-06-10 10:15:00.292545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:10.836 [2024-06-10 10:15:00.292572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:10.836 [2024-06-10 10:15:00.292942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.292954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.292970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.292982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.292996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293131] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 
10:15:00.293493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:26:10.837 [2024-06-10 10:15:00.293853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.293991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.294004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.294018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:10.837 [2024-06-10 10:15:00.294041] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:10.837 [2024-06-10 10:15:00.294055] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 506b159a-1297-4728-99de-db62ad24bd2a 00:26:10.837 [2024-06-10 10:15:00.294081] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:10.837 [2024-06-10 10:15:00.294096] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:10.837 [2024-06-10 10:15:00.294110] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:10.837 [2024-06-10 10:15:00.294126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:10.837 [2024-06-10 10:15:00.294141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:10.837 [2024-06-10 10:15:00.294156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:10.837 [2024-06-10 10:15:00.294168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:10.837 [2024-06-10 10:15:00.294181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:10.837 [2024-06-10 10:15:00.294192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:10.837 [2024-06-10 10:15:00.294206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.837 [2024-06-10 10:15:00.294219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:10.837 [2024-06-10 10:15:00.294233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:26:10.837 [2024-06-10 10:15:00.294246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:10.837 [2024-06-10 10:15:00.311026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.838 [2024-06-10 10:15:00.311079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:10.838 [2024-06-10 10:15:00.311103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.704 ms 00:26:10.838 [2024-06-10 10:15:00.311116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.838 [2024-06-10 10:15:00.311580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.838 [2024-06-10 10:15:00.311615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:10.838 [2024-06-10 10:15:00.311665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:26:10.838 [2024-06-10 10:15:00.311683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.364049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.364119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:11.096 [2024-06-10 10:15:00.364143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.364157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.364263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.364280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:11.096 [2024-06-10 10:15:00.364294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.364306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.364431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.364454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:11.096 [2024-06-10 10:15:00.364470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.364495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.364526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.364541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:11.096 [2024-06-10 10:15:00.364558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.364570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.463999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.464065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:11.096 [2024-06-10 10:15:00.464088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.464101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.548627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.548709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:11.096 [2024-06-10 10:15:00.548733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 
[2024-06-10 10:15:00.548746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.548864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.548884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.096 [2024-06-10 10:15:00.548905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.548917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.548985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.549002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.096 [2024-06-10 10:15:00.549020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.549033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.549161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.549180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.096 [2024-06-10 10:15:00.549202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.549217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.549272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.549290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:11.096 [2024-06-10 10:15:00.549305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.549317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.549369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.549385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.096 [2024-06-10 10:15:00.549399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.549413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.549473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.096 [2024-06-10 10:15:00.549491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.096 [2024-06-10 10:15:00.549508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.096 [2024-06-10 10:15:00.549520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.096 [2024-06-10 10:15:00.549712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.747 ms, result 0 00:26:11.096 true 00:26:11.096 10:15:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83312 00:26:11.096 10:15:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83312 00:26:11.096 10:15:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:11.355 [2024-06-10 10:15:00.684092] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
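(For readability: the dirty-shutdown steps interleaved in the trace above reduce to the short sequence below. This is a condensed sketch assembled only from the commands already printed by dirty_shutdown.sh in this log — the PID, paths, and flags are the ones shown above, nothing is new — and it is not additional test output.)

    # dirty_shutdown.sh@83: kill the running spdk_tgt (PID printed above) so the FTL
    #                       device is left in a dirty state
    kill -9 83312
    # dirty_shutdown.sh@84: drop its trace shared-memory file
    rm -f /dev/shm/spdk_tgt_trace.pid83312
    # dirty_shutdown.sh@87: generate 1 GiB (262144 x 4096-byte blocks) of random data
    #                       to be written into the recovered ftl0 bdev in the next step
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144

(The following log lines show that spdk_dd instance starting up, and then dirty_shutdown.sh@88 replaying testfile2 into ftl0 via --ob=ftl0 --seek=262144 using the saved ftl.json config, which triggers the dirty-state recovery traced below.)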
00:26:11.355 [2024-06-10 10:15:00.684236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84229 ] 00:26:11.355 [2024-06-10 10:15:00.853600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.613 [2024-06-10 10:15:01.087062] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.657  Copying: 158/1024 [MB] (158 MBps) Copying: 326/1024 [MB] (168 MBps) Copying: 490/1024 [MB] (164 MBps) Copying: 638/1024 [MB] (147 MBps) Copying: 808/1024 [MB] (170 MBps) Copying: 972/1024 [MB] (164 MBps) Copying: 1024/1024 [MB] (average 161 MBps) 00:26:19.657 00:26:19.657 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83312 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:19.657 10:15:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:19.657 [2024-06-10 10:15:09.020400] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:26:19.657 [2024-06-10 10:15:09.020547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84312 ] 00:26:19.915 [2024-06-10 10:15:09.182140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.915 [2024-06-10 10:15:09.374000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.173 [2024-06-10 10:15:09.683596] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:20.173 [2024-06-10 10:15:09.683709] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:20.431 [2024-06-10 10:15:09.747937] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:20.431 [2024-06-10 10:15:09.748365] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:20.431 [2024-06-10 10:15:09.748684] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:20.691 [2024-06-10 10:15:09.978793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:09.978870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:20.691 [2024-06-10 10:15:09.978901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:20.691 [2024-06-10 10:15:09.978923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:09.979047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:09.979080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:20.691 [2024-06-10 10:15:09.979100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:20.691 [2024-06-10 10:15:09.979119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:09.979169] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:20.691 [2024-06-10 10:15:09.980361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as 
NV Cache device 00:26:20.691 [2024-06-10 10:15:09.980413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:09.980438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:20.691 [2024-06-10 10:15:09.980461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:26:20.691 [2024-06-10 10:15:09.980481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:09.981806] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:20.691 [2024-06-10 10:15:09.999208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:09.999291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:20.691 [2024-06-10 10:15:09.999323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.400 ms 00:26:20.691 [2024-06-10 10:15:09.999342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:09.999511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:09.999543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:20.691 [2024-06-10 10:15:09.999566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:20.691 [2024-06-10 10:15:09.999586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.004485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:10.004564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:20.691 [2024-06-10 10:15:10.004607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:26:20.691 [2024-06-10 10:15:10.004626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.004814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:10.004846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:20.691 [2024-06-10 10:15:10.004869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:20.691 [2024-06-10 10:15:10.004889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.005007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:10.005035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:20.691 [2024-06-10 10:15:10.005057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:20.691 [2024-06-10 10:15:10.005076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.005163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:20.691 [2024-06-10 10:15:10.009572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:10.009629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:20.691 [2024-06-10 10:15:10.009700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:26:20.691 [2024-06-10 10:15:10.009720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.009807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:20.691 [2024-06-10 10:15:10.009836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:20.691 [2024-06-10 10:15:10.009859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:20.691 [2024-06-10 10:15:10.009879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.009995] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:20.691 [2024-06-10 10:15:10.010042] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:20.691 [2024-06-10 10:15:10.010113] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:20.691 [2024-06-10 10:15:10.010160] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:20.691 [2024-06-10 10:15:10.010315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:20.691 [2024-06-10 10:15:10.010346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:20.691 [2024-06-10 10:15:10.010371] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:20.691 [2024-06-10 10:15:10.010396] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:20.691 [2024-06-10 10:15:10.010420] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:20.691 [2024-06-10 10:15:10.010441] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:20.691 [2024-06-10 10:15:10.010460] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:20.691 [2024-06-10 10:15:10.010486] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:20.691 [2024-06-10 10:15:10.010505] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:20.691 [2024-06-10 10:15:10.010525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:10.010545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:20.691 [2024-06-10 10:15:10.010565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:26:20.691 [2024-06-10 10:15:10.010585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.010729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.691 [2024-06-10 10:15:10.010759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:20.691 [2024-06-10 10:15:10.010780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:20.691 [2024-06-10 10:15:10.010800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.691 [2024-06-10 10:15:10.010957] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:20.691 [2024-06-10 10:15:10.010988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:20.691 [2024-06-10 10:15:10.011010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:20.691 [2024-06-10 10:15:10.011030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011051] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:20.691 [2024-06-10 10:15:10.011069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:20.691 [2024-06-10 10:15:10.011106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:20.691 [2024-06-10 10:15:10.011125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:20.691 [2024-06-10 10:15:10.011162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:20.691 [2024-06-10 10:15:10.011181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:20.691 [2024-06-10 10:15:10.011216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:20.691 [2024-06-10 10:15:10.011238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:20.691 [2024-06-10 10:15:10.011257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:20.691 [2024-06-10 10:15:10.011276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:20.691 [2024-06-10 10:15:10.011317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:20.691 [2024-06-10 10:15:10.011356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:20.691 [2024-06-10 10:15:10.011394] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.691 [2024-06-10 10:15:10.011431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:20.691 [2024-06-10 10:15:10.011448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.691 [2024-06-10 10:15:10.011484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:20.691 [2024-06-10 10:15:10.011503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:20.691 [2024-06-10 10:15:10.011520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.692 [2024-06-10 10:15:10.011536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:20.692 [2024-06-10 10:15:10.011553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:20.692 [2024-06-10 10:15:10.011571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.692 [2024-06-10 10:15:10.011592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:20.692 [2024-06-10 10:15:10.011612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:20.692 [2024-06-10 10:15:10.011631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:20.692 [2024-06-10 10:15:10.011669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:20.692 [2024-06-10 10:15:10.011692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:20.692 
[2024-06-10 10:15:10.011711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:20.692 [2024-06-10 10:15:10.011731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:20.692 [2024-06-10 10:15:10.011749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:20.692 [2024-06-10 10:15:10.011767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.692 [2024-06-10 10:15:10.011785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:20.692 [2024-06-10 10:15:10.011804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:20.692 [2024-06-10 10:15:10.011822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.692 [2024-06-10 10:15:10.011840] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:20.692 [2024-06-10 10:15:10.011860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:20.692 [2024-06-10 10:15:10.011880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:20.692 [2024-06-10 10:15:10.011899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.692 [2024-06-10 10:15:10.011920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:20.692 [2024-06-10 10:15:10.011939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:20.692 [2024-06-10 10:15:10.011957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:20.692 [2024-06-10 10:15:10.011976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:20.692 [2024-06-10 10:15:10.011996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:20.692 [2024-06-10 10:15:10.012015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:20.692 [2024-06-10 10:15:10.012038] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:20.692 [2024-06-10 10:15:10.012063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:20.692 [2024-06-10 10:15:10.012093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:20.692 [2024-06-10 10:15:10.012113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:20.692 [2024-06-10 10:15:10.012134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:20.692 [2024-06-10 10:15:10.012153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:20.692 [2024-06-10 10:15:10.012172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:20.692 [2024-06-10 10:15:10.012192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:20.692 [2024-06-10 10:15:10.012211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:20.692 [2024-06-10 10:15:10.012232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:20.692 [2024-06-10 10:15:10.012253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:20.692 [2024-06-10 10:15:10.012274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:20.692 [2024-06-10 10:15:10.012295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:20.692 [2024-06-10 10:15:10.012314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:20.692 [2024-06-10 10:15:10.012334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:20.692 [2024-06-10 10:15:10.012356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:20.692 [2024-06-10 10:15:10.012375] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:20.692 [2024-06-10 10:15:10.012396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:20.692 [2024-06-10 10:15:10.012418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:20.692 [2024-06-10 10:15:10.012438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:20.692 [2024-06-10 10:15:10.012458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:20.692 [2024-06-10 10:15:10.012480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:20.692 [2024-06-10 10:15:10.012501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.012520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:20.692 [2024-06-10 10:15:10.012541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:26:20.692 [2024-06-10 10:15:10.012561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.060210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.060283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:20.692 [2024-06-10 10:15:10.060316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.083 ms 00:26:20.692 [2024-06-10 10:15:10.060335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.060493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.060522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:20.692 [2024-06-10 10:15:10.060544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:26:20.692 [2024-06-10 10:15:10.060574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 
10:15:10.099466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.099536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:20.692 [2024-06-10 10:15:10.099567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.735 ms 00:26:20.692 [2024-06-10 10:15:10.099587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.099719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.099754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:20.692 [2024-06-10 10:15:10.099775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:20.692 [2024-06-10 10:15:10.099792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.100276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.100315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:20.692 [2024-06-10 10:15:10.100342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:26:20.692 [2024-06-10 10:15:10.100364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.100597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.100652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:20.692 [2024-06-10 10:15:10.100687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:26:20.692 [2024-06-10 10:15:10.100708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.117088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.117162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:20.692 [2024-06-10 10:15:10.117194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.307 ms 00:26:20.692 [2024-06-10 10:15:10.117215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.134631] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:20.692 [2024-06-10 10:15:10.134733] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:20.692 [2024-06-10 10:15:10.134768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.134789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:20.692 [2024-06-10 10:15:10.134810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:26:20.692 [2024-06-10 10:15:10.134828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.168567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.168673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:20.692 [2024-06-10 10:15:10.168706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.604 ms 00:26:20.692 [2024-06-10 10:15:10.168728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.186592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:20.692 [2024-06-10 10:15:10.186701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:20.692 [2024-06-10 10:15:10.186733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.748 ms 00:26:20.692 [2024-06-10 10:15:10.186752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.204286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.204382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:20.692 [2024-06-10 10:15:10.204454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.420 ms 00:26:20.692 [2024-06-10 10:15:10.204473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.692 [2024-06-10 10:15:10.205511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.692 [2024-06-10 10:15:10.205556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:20.692 [2024-06-10 10:15:10.205592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:26:20.692 [2024-06-10 10:15:10.205613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.291423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.291508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:20.950 [2024-06-10 10:15:10.291539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.747 ms 00:26:20.950 [2024-06-10 10:15:10.291559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.304725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:20.950 [2024-06-10 10:15:10.307610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.307671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:20.950 [2024-06-10 10:15:10.307704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.912 ms 00:26:20.950 [2024-06-10 10:15:10.307728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.307894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.307936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:20.950 [2024-06-10 10:15:10.307967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:20.950 [2024-06-10 10:15:10.307986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.308145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.308178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:20.950 [2024-06-10 10:15:10.308200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:20.950 [2024-06-10 10:15:10.308220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.308274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.308300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:20.950 [2024-06-10 10:15:10.308321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 
00:26:20.950 [2024-06-10 10:15:10.308349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.308438] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:20.950 [2024-06-10 10:15:10.308471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.308492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:20.950 [2024-06-10 10:15:10.308512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:20.950 [2024-06-10 10:15:10.308532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.349957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.350045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:20.950 [2024-06-10 10:15:10.350091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.373 ms 00:26:20.950 [2024-06-10 10:15:10.350110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.350264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.950 [2024-06-10 10:15:10.350294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:20.950 [2024-06-10 10:15:10.350318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:20.950 [2024-06-10 10:15:10.350337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.950 [2024-06-10 10:15:10.351756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.402 ms, result 0 00:26:58.139  Copying: 27/1024 [MB] (27 MBps) Copying: 57/1024 [MB] (30 MBps) Copying: 87/1024 [MB] (29 MBps) Copying: 115/1024 [MB] (28 MBps) Copying: 141/1024 [MB] (26 MBps) Copying: 171/1024 [MB] (30 MBps) Copying: 202/1024 [MB] (30 MBps) Copying: 232/1024 [MB] (29 MBps) Copying: 260/1024 [MB] (28 MBps) Copying: 289/1024 [MB] (28 MBps) Copying: 319/1024 [MB] (30 MBps) Copying: 350/1024 [MB] (31 MBps) Copying: 381/1024 [MB] (30 MBps) Copying: 409/1024 [MB] (28 MBps) Copying: 438/1024 [MB] (28 MBps) Copying: 465/1024 [MB] (26 MBps) Copying: 492/1024 [MB] (27 MBps) Copying: 521/1024 [MB] (28 MBps) Copying: 550/1024 [MB] (28 MBps) Copying: 576/1024 [MB] (26 MBps) Copying: 601/1024 [MB] (24 MBps) Copying: 629/1024 [MB] (28 MBps) Copying: 659/1024 [MB] (29 MBps) Copying: 687/1024 [MB] (28 MBps) Copying: 715/1024 [MB] (27 MBps) Copying: 742/1024 [MB] (26 MBps) Copying: 770/1024 [MB] (28 MBps) Copying: 798/1024 [MB] (28 MBps) Copying: 828/1024 [MB] (29 MBps) Copying: 857/1024 [MB] (28 MBps) Copying: 883/1024 [MB] (26 MBps) Copying: 910/1024 [MB] (27 MBps) Copying: 938/1024 [MB] (28 MBps) Copying: 964/1024 [MB] (25 MBps) Copying: 990/1024 [MB] (25 MBps) Copying: 1017/1024 [MB] (26 MBps) Copying: 1048296/1048576 [kB] (6792 kBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-06-10 10:15:47.639742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.139 [2024-06-10 10:15:47.639832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:58.139 [2024-06-10 10:15:47.639870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:58.139 [2024-06-10 10:15:47.639883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.139 [2024-06-10 10:15:47.640845] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:58.139 [2024-06-10 10:15:47.646163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.139 [2024-06-10 10:15:47.646205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:58.139 [2024-06-10 10:15:47.646224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.263 ms 00:26:58.139 [2024-06-10 10:15:47.646240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.661835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.661891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:58.397 [2024-06-10 10:15:47.661927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.438 ms 00:26:58.397 [2024-06-10 10:15:47.661947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.682680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.682761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:58.397 [2024-06-10 10:15:47.682782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.703 ms 00:26:58.397 [2024-06-10 10:15:47.682795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.689686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.689721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:58.397 [2024-06-10 10:15:47.689738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.846 ms 00:26:58.397 [2024-06-10 10:15:47.689763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.721525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.721579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:58.397 [2024-06-10 10:15:47.721598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.682 ms 00:26:58.397 [2024-06-10 10:15:47.721610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.740052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.740108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:58.397 [2024-06-10 10:15:47.740127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.376 ms 00:26:58.397 [2024-06-10 10:15:47.740139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.799180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.799292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:58.397 [2024-06-10 10:15:47.799326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.976 ms 00:26:58.397 [2024-06-10 10:15:47.799339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.831591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.831657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:58.397 [2024-06-10 10:15:47.831677] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.222 ms 00:26:58.397 [2024-06-10 10:15:47.831689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.863250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.863306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:58.397 [2024-06-10 10:15:47.863325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.505 ms 00:26:58.397 [2024-06-10 10:15:47.863336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.397 [2024-06-10 10:15:47.895244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.397 [2024-06-10 10:15:47.895319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:58.397 [2024-06-10 10:15:47.895340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.850 ms 00:26:58.397 [2024-06-10 10:15:47.895352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.656 [2024-06-10 10:15:47.926550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.656 [2024-06-10 10:15:47.926615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:58.656 [2024-06-10 10:15:47.926634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.090 ms 00:26:58.656 [2024-06-10 10:15:47.926674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.656 [2024-06-10 10:15:47.926758] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:58.656 [2024-06-10 10:15:47.926790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 98304 / 261120 wr_cnt: 1 state: open 00:26:58.656 [2024-06-10 10:15:47.926806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926951] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.926998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:58.656 [2024-06-10 10:15:47.927254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 
[2024-06-10 10:15:47.927283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:26:58.657 [2024-06-10 10:15:47.927583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.927996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:58.657 [2024-06-10 10:15:47.928101] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:58.657 [2024-06-10 10:15:47.928113] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 506b159a-1297-4728-99de-db62ad24bd2a 00:26:58.657 [2024-06-10 10:15:47.928125] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 98304 00:26:58.657 [2024-06-10 10:15:47.928136] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 99264 00:26:58.657 [2024-06-10 10:15:47.928146] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 98304 00:26:58.657 [2024-06-10 10:15:47.928165] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0098 00:26:58.657 [2024-06-10 10:15:47.928180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:58.657 [2024-06-10 10:15:47.928191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:58.657 [2024-06-10 10:15:47.928202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:58.657 [2024-06-10 10:15:47.928212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:58.657 [2024-06-10 10:15:47.928223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:58.657 [2024-06-10 10:15:47.928235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.657 [2024-06-10 10:15:47.928246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:58.657 [2024-06-10 10:15:47.928258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:26:58.657 [2024-06-10 10:15:47.928269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.657 [2024-06-10 10:15:47.945252] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:58.657 [2024-06-10 10:15:47.945305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:58.657 [2024-06-10 10:15:47.945333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.914 ms 00:26:58.657 [2024-06-10 10:15:47.945345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.657 [2024-06-10 10:15:47.945826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.657 [2024-06-10 10:15:47.945851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:58.657 [2024-06-10 10:15:47.945865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:26:58.657 [2024-06-10 10:15:47.945878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.657 [2024-06-10 10:15:47.983514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.657 [2024-06-10 10:15:47.983583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:58.657 [2024-06-10 10:15:47.983603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.657 [2024-06-10 10:15:47.983615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.657 [2024-06-10 10:15:47.983715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.657 [2024-06-10 10:15:47.983734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:58.657 [2024-06-10 10:15:47.983747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.657 [2024-06-10 10:15:47.983758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.658 [2024-06-10 10:15:47.983854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.658 [2024-06-10 10:15:47.983879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:58.658 [2024-06-10 10:15:47.983892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.658 [2024-06-10 10:15:47.983904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.658 [2024-06-10 10:15:47.983928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.658 [2024-06-10 10:15:47.983941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:58.658 [2024-06-10 10:15:47.983953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.658 [2024-06-10 10:15:47.983964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.658 [2024-06-10 10:15:48.084004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.658 [2024-06-10 10:15:48.084078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:58.658 [2024-06-10 10:15:48.084096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.658 [2024-06-10 10:15:48.084108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.916 [2024-06-10 10:15:48.169443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.916 [2024-06-10 10:15:48.169510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:58.916 [2024-06-10 10:15:48.169529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.916 [2024-06-10 10:15:48.169541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
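The dumps above read as follows: each "Band N: v / 261120 wr_cnt: w state: s" line gives a band's valid blocks out of its total size, its write count, and its state, and ftl_dev_dump_stats then reports write amplification (WAF) as total device writes over user writes. A quick check that reproduces both WAF figures printed in this run (the second dump appears near the end of the log) — a minimal sketch, not part of the test itself:

# Reproduce the WAF values from the ftl_dev_dump_stats output:
# write amplification factor = total writes / user writes.
for total, user in [(99264, 98304), (168640, 166656)]:
    print(f"WAF = {total}/{user} = {total / user:.4f}")
# WAF = 99264/98304 = 1.0098
# WAF = 168640/166656 = 1.0119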
00:26:58.916 [2024-06-10 10:15:48.169622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.916 [2024-06-10 10:15:48.169670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:58.916 [2024-06-10 10:15:48.169697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.916 [2024-06-10 10:15:48.169709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.916 [2024-06-10 10:15:48.169757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.916 [2024-06-10 10:15:48.169773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:58.916 [2024-06-10 10:15:48.169785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.916 [2024-06-10 10:15:48.169796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.916 [2024-06-10 10:15:48.169916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.916 [2024-06-10 10:15:48.169936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:58.916 [2024-06-10 10:15:48.169949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.916 [2024-06-10 10:15:48.169969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.916 [2024-06-10 10:15:48.170032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.916 [2024-06-10 10:15:48.170051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:58.916 [2024-06-10 10:15:48.170063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.916 [2024-06-10 10:15:48.170075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.916 [2024-06-10 10:15:48.170119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.916 [2024-06-10 10:15:48.170134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:58.916 [2024-06-10 10:15:48.170146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.916 [2024-06-10 10:15:48.170163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.916 [2024-06-10 10:15:48.170215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.916 [2024-06-10 10:15:48.170231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:58.916 [2024-06-10 10:15:48.170244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.916 [2024-06-10 10:15:48.170255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.916 [2024-06-10 10:15:48.170393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.864 ms, result 0 00:27:00.318 00:27:00.318 00:27:00.318 10:15:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:02.880 10:15:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:02.880 [2024-06-10 10:15:52.043759] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
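At this point dirty_shutdown.sh has taken an md5sum of a reference file and is launching spdk_dd to copy 262144 blocks back out of the ftl0 bdev into a regular file, so the data written before the dirty shutdown can be checksummed. A minimal sketch of that kind of integrity check, assuming the pairing of files implied by the log (the paths are the ones shown above; which file is compared against which is not visible here and is an assumption about the script):

import hashlib

# Hash a file in 1 MiB chunks; prints the same digest md5sum reports.
def md5sum(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical comparison once spdk_dd finishes (pairing assumed):
# assert md5sum("test/ftl/testfile") == md5sum("test/ftl/testfile2")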
00:27:02.880 [2024-06-10 10:15:52.043930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84734 ] 00:27:02.880 [2024-06-10 10:15:52.219407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.138 [2024-06-10 10:15:52.452343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.397 [2024-06-10 10:15:52.766903] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:03.397 [2024-06-10 10:15:52.766974] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:03.656 [2024-06-10 10:15:52.920917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.920983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:03.656 [2024-06-10 10:15:52.921003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:03.656 [2024-06-10 10:15:52.921016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.921089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.921110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:03.656 [2024-06-10 10:15:52.921123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:03.656 [2024-06-10 10:15:52.921134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.921169] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:03.656 [2024-06-10 10:15:52.922102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:03.656 [2024-06-10 10:15:52.922136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.922150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:03.656 [2024-06-10 10:15:52.922167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:27:03.656 [2024-06-10 10:15:52.922178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.923249] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:03.656 [2024-06-10 10:15:52.939648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.939694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:03.656 [2024-06-10 10:15:52.939727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.398 ms 00:27:03.656 [2024-06-10 10:15:52.939739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.939820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.939841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:03.656 [2024-06-10 10:15:52.939856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:03.656 [2024-06-10 10:15:52.939876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.944428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 
10:15:52.944474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:03.656 [2024-06-10 10:15:52.944490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:27:03.656 [2024-06-10 10:15:52.944501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.944607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.944626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:03.656 [2024-06-10 10:15:52.944680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:03.656 [2024-06-10 10:15:52.944695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.944764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.944782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:03.656 [2024-06-10 10:15:52.944795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:03.656 [2024-06-10 10:15:52.944806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.944840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:03.656 [2024-06-10 10:15:52.949234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.949271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:03.656 [2024-06-10 10:15:52.949286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.403 ms 00:27:03.656 [2024-06-10 10:15:52.949298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.949342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.949378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:03.656 [2024-06-10 10:15:52.949390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:03.656 [2024-06-10 10:15:52.949401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.949453] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:03.656 [2024-06-10 10:15:52.949488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:03.656 [2024-06-10 10:15:52.949532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:03.656 [2024-06-10 10:15:52.949552] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:03.656 [2024-06-10 10:15:52.949681] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:03.656 [2024-06-10 10:15:52.949700] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:03.656 [2024-06-10 10:15:52.949715] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:03.656 [2024-06-10 10:15:52.949730] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:03.656 [2024-06-10 10:15:52.949743] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:03.656 [2024-06-10 10:15:52.949755] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:03.656 [2024-06-10 10:15:52.949766] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:03.656 [2024-06-10 10:15:52.949777] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:03.656 [2024-06-10 10:15:52.949787] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:03.656 [2024-06-10 10:15:52.949799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.949810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:03.656 [2024-06-10 10:15:52.949827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:27:03.656 [2024-06-10 10:15:52.949838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.949936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.656 [2024-06-10 10:15:52.949950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:03.656 [2024-06-10 10:15:52.949961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:03.656 [2024-06-10 10:15:52.949972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.656 [2024-06-10 10:15:52.950107] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:03.656 [2024-06-10 10:15:52.950126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:03.656 [2024-06-10 10:15:52.950138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:03.657 [2024-06-10 10:15:52.950178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:03.657 [2024-06-10 10:15:52.950208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:03.657 [2024-06-10 10:15:52.950250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:03.657 [2024-06-10 10:15:52.950261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:03.657 [2024-06-10 10:15:52.950271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:03.657 [2024-06-10 10:15:52.950282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:03.657 [2024-06-10 10:15:52.950292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:03.657 [2024-06-10 10:15:52.950302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:03.657 [2024-06-10 10:15:52.950323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:03.657 [2024-06-10 10:15:52.950353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:03.657 [2024-06-10 10:15:52.950421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:03.657 [2024-06-10 10:15:52.950462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:03.657 [2024-06-10 10:15:52.950502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:03.657 [2024-06-10 10:15:52.950541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:03.657 [2024-06-10 10:15:52.950562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:03.657 [2024-06-10 10:15:52.950573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:03.657 [2024-06-10 10:15:52.950583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:03.657 [2024-06-10 10:15:52.950594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:03.657 [2024-06-10 10:15:52.950604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:03.657 [2024-06-10 10:15:52.950614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:03.657 [2024-06-10 10:15:52.950634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:03.657 [2024-06-10 10:15:52.950675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950686] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:03.657 [2024-06-10 10:15:52.950698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:03.657 [2024-06-10 10:15:52.950709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:03.657 [2024-06-10 10:15:52.950730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:03.657 [2024-06-10 10:15:52.950741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:03.657 [2024-06-10 10:15:52.950751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:03.657 [2024-06-10 10:15:52.950764] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:03.657 [2024-06-10 10:15:52.950775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:03.657 [2024-06-10 10:15:52.950785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:03.657 [2024-06-10 10:15:52.950797] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:03.657 [2024-06-10 10:15:52.950811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:03.657 [2024-06-10 10:15:52.950823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:03.657 [2024-06-10 10:15:52.950834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:03.657 [2024-06-10 10:15:52.950846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:03.657 [2024-06-10 10:15:52.950857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:03.657 [2024-06-10 10:15:52.950868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:03.657 [2024-06-10 10:15:52.950879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:03.657 [2024-06-10 10:15:52.950890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:03.657 [2024-06-10 10:15:52.950901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:03.657 [2024-06-10 10:15:52.950912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:03.657 [2024-06-10 10:15:52.950923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:03.657 [2024-06-10 10:15:52.950934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:03.657 [2024-06-10 10:15:52.950945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:03.657 [2024-06-10 10:15:52.950956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:03.657 [2024-06-10 10:15:52.950968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:03.657 [2024-06-10 10:15:52.950979] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:03.657 [2024-06-10 10:15:52.950991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:03.657 [2024-06-10 10:15:52.951003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:27:03.657 [2024-06-10 10:15:52.951014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:03.657 [2024-06-10 10:15:52.951027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:03.657 [2024-06-10 10:15:52.951045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:03.657 [2024-06-10 10:15:52.951059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.657 [2024-06-10 10:15:52.951077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:03.657 [2024-06-10 10:15:52.951096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:27:03.657 [2024-06-10 10:15:52.951107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.657 [2024-06-10 10:15:53.002864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.657 [2024-06-10 10:15:53.002955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:03.657 [2024-06-10 10:15:53.002992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.681 ms 00:27:03.657 [2024-06-10 10:15:53.003019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.657 [2024-06-10 10:15:53.003238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.657 [2024-06-10 10:15:53.003274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:03.657 [2024-06-10 10:15:53.003302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:27:03.657 [2024-06-10 10:15:53.003325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.657 [2024-06-10 10:15:53.059908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.657 [2024-06-10 10:15:53.059967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:03.657 [2024-06-10 10:15:53.059990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.439 ms 00:27:03.657 [2024-06-10 10:15:53.060004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.657 [2024-06-10 10:15:53.060083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.657 [2024-06-10 10:15:53.060103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:03.657 [2024-06-10 10:15:53.060118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:03.657 [2024-06-10 10:15:53.060131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.657 [2024-06-10 10:15:53.060535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.657 [2024-06-10 10:15:53.060570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:03.657 [2024-06-10 10:15:53.060587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:27:03.657 [2024-06-10 10:15:53.060600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.657 [2024-06-10 10:15:53.060798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.657 [2024-06-10 10:15:53.060822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:03.658 [2024-06-10 10:15:53.060837] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:27:03.658 [2024-06-10 10:15:53.060850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.658 [2024-06-10 10:15:53.080310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.658 [2024-06-10 10:15:53.080365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:03.658 [2024-06-10 10:15:53.080387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.427 ms 00:27:03.658 [2024-06-10 10:15:53.080401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.658 [2024-06-10 10:15:53.100164] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:03.658 [2024-06-10 10:15:53.100219] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:03.658 [2024-06-10 10:15:53.100247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.658 [2024-06-10 10:15:53.100262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:03.658 [2024-06-10 10:15:53.100278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.663 ms 00:27:03.658 [2024-06-10 10:15:53.100292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.658 [2024-06-10 10:15:53.136862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.658 [2024-06-10 10:15:53.136938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:03.658 [2024-06-10 10:15:53.136961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.497 ms 00:27:03.658 [2024-06-10 10:15:53.136987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.658 [2024-06-10 10:15:53.156364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.658 [2024-06-10 10:15:53.156420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:03.658 [2024-06-10 10:15:53.156441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.277 ms 00:27:03.658 [2024-06-10 10:15:53.156455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.175284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.175334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:03.916 [2024-06-10 10:15:53.175352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.771 ms 00:27:03.916 [2024-06-10 10:15:53.175366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.176366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.176406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:03.916 [2024-06-10 10:15:53.176423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:27:03.916 [2024-06-10 10:15:53.176437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.280461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.280552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:03.916 [2024-06-10 10:15:53.280584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.991 ms 00:27:03.916 
[2024-06-10 10:15:53.280607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.305497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:03.916 [2024-06-10 10:15:53.309245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.309299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:03.916 [2024-06-10 10:15:53.309321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.516 ms 00:27:03.916 [2024-06-10 10:15:53.309335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.309481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.309509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:03.916 [2024-06-10 10:15:53.309525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:03.916 [2024-06-10 10:15:53.309538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.311549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.311604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:03.916 [2024-06-10 10:15:53.311654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.943 ms 00:27:03.916 [2024-06-10 10:15:53.311678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.311756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.311781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:03.916 [2024-06-10 10:15:53.311803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:03.916 [2024-06-10 10:15:53.311823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.311899] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:03.916 [2024-06-10 10:15:53.311927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.311947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:03.916 [2024-06-10 10:15:53.311968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:03.916 [2024-06-10 10:15:53.311993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.354312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.354377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:03.916 [2024-06-10 10:15:53.354400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.252 ms 00:27:03.916 [2024-06-10 10:15:53.354414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 10:15:53.354520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.916 [2024-06-10 10:15:53.354543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:03.916 [2024-06-10 10:15:53.354566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:03.916 [2024-06-10 10:15:53.354580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.916 [2024-06-10 
10:15:53.363462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 438.222 ms, result 0 00:27:41.740
Copying: 792/1048576 [kB] (792 kBps)
Copying: 1540/1048576 [kB] (748 kBps)
Copying: 5432/1048576 [kB] (3892 kBps)
Copying: 29/1024 [MB] (24 MBps)
Copying: 57/1024 [MB] (28 MBps)
Copying: 87/1024 [MB] (30 MBps)
Copying: 118/1024 [MB] (30 MBps)
Copying: 148/1024 [MB] (30 MBps)
Copying: 179/1024 [MB] (30 MBps)
Copying: 209/1024 [MB] (30 MBps)
Copying: 239/1024 [MB] (29 MBps)
Copying: 270/1024 [MB] (30 MBps)
Copying: 300/1024 [MB] (30 MBps)
Copying: 331/1024 [MB] (30 MBps)
Copying: 361/1024 [MB] (30 MBps)
Copying: 392/1024 [MB] (30 MBps)
Copying: 420/1024 [MB] (28 MBps)
Copying: 450/1024 [MB] (29 MBps)
Copying: 480/1024 [MB] (29 MBps)
Copying: 511/1024 [MB] (30 MBps)
Copying: 541/1024 [MB] (30 MBps)
Copying: 571/1024 [MB] (29 MBps)
Copying: 600/1024 [MB] (29 MBps)
Copying: 629/1024 [MB] (28 MBps)
Copying: 660/1024 [MB] (30 MBps)
Copying: 691/1024 [MB] (31 MBps)
Copying: 722/1024 [MB] (30 MBps)
Copying: 752/1024 [MB] (29 MBps)
Copying: 782/1024 [MB] (30 MBps)
Copying: 813/1024 [MB] (31 MBps)
Copying: 844/1024 [MB] (30 MBps)
Copying: 874/1024 [MB] (30 MBps)
Copying: 905/1024 [MB] (30 MBps)
Copying: 934/1024 [MB] (28 MBps)
Copying: 963/1024 [MB] (29 MBps)
Copying: 993/1024 [MB] (29 MBps)
Copying: 1022/1024 [MB] (29 MBps)
Copying: 1024/1024 [MB] (average 27 MBps)
[2024-06-10 10:16:31.246962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.740 [2024-06-10 10:16:31.247093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:41.740 [2024-06-10 10:16:31.247132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:41.740 [2024-06-10 10:16:31.247145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.740 [2024-06-10 10:16:31.247184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:41.740 [2024-06-10 10:16:31.250742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.740 [2024-06-10 10:16:31.250779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:41.740 [2024-06-10 10:16:31.250794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:27:41.740 [2024-06-10 10:16:31.250806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.740 [2024-06-10 10:16:31.251057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.740 [2024-06-10 10:16:31.251085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:41.740 [2024-06-10 10:16:31.251099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:27:41.740 [2024-06-10 10:16:31.251111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.264294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.264383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:42.000 [2024-06-10 10:16:31.264405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.159 ms 00:27:42.000 [2024-06-10 10:16:31.264417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.271410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.271468] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:42.000 [2024-06-10 10:16:31.271496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.946 ms 00:27:42.000 [2024-06-10 10:16:31.271510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.303934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.303988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:42.000 [2024-06-10 10:16:31.304007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.319 ms 00:27:42.000 [2024-06-10 10:16:31.304019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.321838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.321888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:42.000 [2024-06-10 10:16:31.321916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.768 ms 00:27:42.000 [2024-06-10 10:16:31.321928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.325704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.325751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:42.000 [2024-06-10 10:16:31.325779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:27:42.000 [2024-06-10 10:16:31.325791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.357218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.357273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:42.000 [2024-06-10 10:16:31.357292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.404 ms 00:27:42.000 [2024-06-10 10:16:31.357303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.388481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.388531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:42.000 [2024-06-10 10:16:31.388550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.127 ms 00:27:42.000 [2024-06-10 10:16:31.388562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.419309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.419360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:42.000 [2024-06-10 10:16:31.419377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.700 ms 00:27:42.000 [2024-06-10 10:16:31.419388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.450143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.000 [2024-06-10 10:16:31.450198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:42.000 [2024-06-10 10:16:31.450215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.652 ms 00:27:42.000 [2024-06-10 10:16:31.450226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.000 [2024-06-10 10:16:31.450277] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:27:42.000 [2024-06-10 10:16:31.450304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:42.000 [2024-06-10 10:16:31.450319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:27:42.000 [2024-06-10 10:16:31.450332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 
/ 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:42.000 [2024-06-10 10:16:31.450896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.450989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451213] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 
10:16:31.451503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:42.001 [2024-06-10 10:16:31.451524] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:42.001 [2024-06-10 10:16:31.451536] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 506b159a-1297-4728-99de-db62ad24bd2a 00:27:42.001 [2024-06-10 10:16:31.451548] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:27:42.001 [2024-06-10 10:16:31.451558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 168640 00:27:42.001 [2024-06-10 10:16:31.451569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 166656 00:27:42.001 [2024-06-10 10:16:31.451581] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119 00:27:42.001 [2024-06-10 10:16:31.451591] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:42.001 [2024-06-10 10:16:31.451602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:42.001 [2024-06-10 10:16:31.451636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:42.001 [2024-06-10 10:16:31.451658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:42.001 [2024-06-10 10:16:31.451669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:42.001 [2024-06-10 10:16:31.451680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.001 [2024-06-10 10:16:31.451692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:42.001 [2024-06-10 10:16:31.451704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:27:42.001 [2024-06-10 10:16:31.451714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.001 [2024-06-10 10:16:31.468246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.001 [2024-06-10 10:16:31.468290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:42.001 [2024-06-10 10:16:31.468306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.485 ms 00:27:42.001 [2024-06-10 10:16:31.468317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.001 [2024-06-10 10:16:31.468788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.001 [2024-06-10 10:16:31.468830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:42.001 [2024-06-10 10:16:31.468844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:27:42.001 [2024-06-10 10:16:31.468855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.001 [2024-06-10 10:16:31.505806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.001 [2024-06-10 10:16:31.505867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.001 [2024-06-10 10:16:31.505886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.001 [2024-06-10 10:16:31.505904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.001 [2024-06-10 10:16:31.505990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.001 [2024-06-10 10:16:31.506005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.001 [2024-06-10 10:16:31.506017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:42.001 [2024-06-10 10:16:31.506027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.001 [2024-06-10 10:16:31.506125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.001 [2024-06-10 10:16:31.506144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.001 [2024-06-10 10:16:31.506156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.001 [2024-06-10 10:16:31.506167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.001 [2024-06-10 10:16:31.506196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.001 [2024-06-10 10:16:31.506209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.001 [2024-06-10 10:16:31.506221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.001 [2024-06-10 10:16:31.506231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.609720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.609790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.260 [2024-06-10 10:16:31.609811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.609831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.695805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.695873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.260 [2024-06-10 10:16:31.695893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.695905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.695986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.696002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.260 [2024-06-10 10:16:31.696015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.696026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.696078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.696093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.260 [2024-06-10 10:16:31.696104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.696115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.696235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.696255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.260 [2024-06-10 10:16:31.696268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.696279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.696330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.696354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:42.260 [2024-06-10 10:16:31.696366] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.696377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.696422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.696439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.260 [2024-06-10 10:16:31.696451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.696462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.696516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.260 [2024-06-10 10:16:31.696533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.260 [2024-06-10 10:16:31.696546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.260 [2024-06-10 10:16:31.696556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.260 [2024-06-10 10:16:31.696720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.702 ms, result 0 00:27:43.632 00:27:43.632 00:27:43.632 10:16:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:45.534 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:45.534 10:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:45.793 [2024-06-10 10:16:35.094055] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:27:45.793 [2024-06-10 10:16:35.094194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85160 ] 00:27:45.793 [2024-06-10 10:16:35.259084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.049 [2024-06-10 10:16:35.487978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.615 [2024-06-10 10:16:35.837433] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:46.615 [2024-06-10 10:16:35.837508] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:46.615 [2024-06-10 10:16:35.991930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 10:16:35.992000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:46.615 [2024-06-10 10:16:35.992021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:46.615 [2024-06-10 10:16:35.992035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.615 [2024-06-10 10:16:35.992111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 10:16:35.992132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:46.615 [2024-06-10 10:16:35.992146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:46.615 [2024-06-10 10:16:35.992159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.615 [2024-06-10 10:16:35.992195] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:46.615 [2024-06-10 10:16:35.993128] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:46.615 [2024-06-10 10:16:35.993165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 10:16:35.993180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:46.615 [2024-06-10 10:16:35.993198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:27:46.615 [2024-06-10 10:16:35.993211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.615 [2024-06-10 10:16:35.994254] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:46.615 [2024-06-10 10:16:36.010418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 10:16:36.010458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:46.615 [2024-06-10 10:16:36.010476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.165 ms 00:27:46.615 [2024-06-10 10:16:36.010490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.615 [2024-06-10 10:16:36.010564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 10:16:36.010585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:46.615 [2024-06-10 10:16:36.010599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:46.615 [2024-06-10 10:16:36.010616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.615 [2024-06-10 10:16:36.014926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 
10:16:36.014969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:46.615 [2024-06-10 10:16:36.014986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.204 ms 00:27:46.615 [2024-06-10 10:16:36.014999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.615 [2024-06-10 10:16:36.015098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 10:16:36.015117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:46.615 [2024-06-10 10:16:36.015134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:46.615 [2024-06-10 10:16:36.015147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.615 [2024-06-10 10:16:36.015223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.615 [2024-06-10 10:16:36.015243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:46.615 [2024-06-10 10:16:36.015256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:46.616 [2024-06-10 10:16:36.015269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.616 [2024-06-10 10:16:36.015315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:46.616 [2024-06-10 10:16:36.019567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.616 [2024-06-10 10:16:36.019601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:46.616 [2024-06-10 10:16:36.019617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:27:46.616 [2024-06-10 10:16:36.019630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.616 [2024-06-10 10:16:36.019694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.616 [2024-06-10 10:16:36.019715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:46.616 [2024-06-10 10:16:36.019729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:46.616 [2024-06-10 10:16:36.019741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.616 [2024-06-10 10:16:36.019785] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:46.616 [2024-06-10 10:16:36.019817] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:46.616 [2024-06-10 10:16:36.019862] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:46.616 [2024-06-10 10:16:36.019883] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:46.616 [2024-06-10 10:16:36.019995] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:46.616 [2024-06-10 10:16:36.020012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:46.616 [2024-06-10 10:16:36.020027] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:46.616 [2024-06-10 10:16:36.020044] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020059] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020073] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:46.616 [2024-06-10 10:16:36.020085] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:46.616 [2024-06-10 10:16:36.020108] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:46.616 [2024-06-10 10:16:36.020120] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:46.616 [2024-06-10 10:16:36.020133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.616 [2024-06-10 10:16:36.020145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:46.616 [2024-06-10 10:16:36.020162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:27:46.616 [2024-06-10 10:16:36.020174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.616 [2024-06-10 10:16:36.020267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.616 [2024-06-10 10:16:36.020294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:46.616 [2024-06-10 10:16:36.020307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:46.616 [2024-06-10 10:16:36.020319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.616 [2024-06-10 10:16:36.020422] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:46.616 [2024-06-10 10:16:36.020439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:46.616 [2024-06-10 10:16:36.020452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:46.616 [2024-06-10 10:16:36.020495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:46.616 [2024-06-10 10:16:36.020531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:46.616 [2024-06-10 10:16:36.020555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:46.616 [2024-06-10 10:16:36.020566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:46.616 [2024-06-10 10:16:36.020577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:46.616 [2024-06-10 10:16:36.020589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:46.616 [2024-06-10 10:16:36.020602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:46.616 [2024-06-10 10:16:36.020614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:46.616 [2024-06-10 10:16:36.020636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:46.616 [2024-06-10 10:16:36.020706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:46.616 [2024-06-10 10:16:36.020756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:46.616 [2024-06-10 10:16:36.020802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:46.616 [2024-06-10 10:16:36.020837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:46.616 [2024-06-10 10:16:36.020862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:46.616 [2024-06-10 10:16:36.020874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:46.616 [2024-06-10 10:16:36.020898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:46.616 [2024-06-10 10:16:36.020910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:46.616 [2024-06-10 10:16:36.020921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:46.616 [2024-06-10 10:16:36.020933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:46.616 [2024-06-10 10:16:36.020945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:46.616 [2024-06-10 10:16:36.020957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.616 [2024-06-10 10:16:36.020968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:46.616 [2024-06-10 10:16:36.020980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:46.616 [2024-06-10 10:16:36.020993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.616 [2024-06-10 10:16:36.021004] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:46.616 [2024-06-10 10:16:36.021017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:46.616 [2024-06-10 10:16:36.021030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:46.616 [2024-06-10 10:16:36.021043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:46.616 [2024-06-10 10:16:36.021056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:46.616 [2024-06-10 10:16:36.021068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:46.616 [2024-06-10 10:16:36.021080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:46.616 [2024-06-10 10:16:36.021092] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:46.616 [2024-06-10 10:16:36.021104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:46.616 [2024-06-10 10:16:36.021116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:46.616 [2024-06-10 10:16:36.021129] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:46.616 [2024-06-10 10:16:36.021144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.616 [2024-06-10 10:16:36.021158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:46.616 [2024-06-10 10:16:36.021170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:46.616 [2024-06-10 10:16:36.021183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:46.616 [2024-06-10 10:16:36.021195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:46.616 [2024-06-10 10:16:36.021207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:46.616 [2024-06-10 10:16:36.021220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:46.616 [2024-06-10 10:16:36.021232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:46.616 [2024-06-10 10:16:36.021244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:46.616 [2024-06-10 10:16:36.021257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:46.616 [2024-06-10 10:16:36.021269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:46.616 [2024-06-10 10:16:36.021282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:46.616 [2024-06-10 10:16:36.021295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:46.616 [2024-06-10 10:16:36.021307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:46.617 [2024-06-10 10:16:36.021319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:46.617 [2024-06-10 10:16:36.021331] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:46.617 [2024-06-10 10:16:36.021345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.617 [2024-06-10 10:16:36.021359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:27:46.617 [2024-06-10 10:16:36.021372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:46.617 [2024-06-10 10:16:36.021385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:46.617 [2024-06-10 10:16:36.021397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:46.617 [2024-06-10 10:16:36.021411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.021423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:46.617 [2024-06-10 10:16:36.021441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:27:46.617 [2024-06-10 10:16:36.021454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.617 [2024-06-10 10:16:36.059689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.059756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:46.617 [2024-06-10 10:16:36.059778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.147 ms 00:27:46.617 [2024-06-10 10:16:36.059792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.617 [2024-06-10 10:16:36.059915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.059932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:46.617 [2024-06-10 10:16:36.059946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:46.617 [2024-06-10 10:16:36.059958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.617 [2024-06-10 10:16:36.098770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.098825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.617 [2024-06-10 10:16:36.098845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.722 ms 00:27:46.617 [2024-06-10 10:16:36.098858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.617 [2024-06-10 10:16:36.098929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.098946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.617 [2024-06-10 10:16:36.098960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:46.617 [2024-06-10 10:16:36.098973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.617 [2024-06-10 10:16:36.099348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.099372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.617 [2024-06-10 10:16:36.099386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:27:46.617 [2024-06-10 10:16:36.099399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.617 [2024-06-10 10:16:36.099553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.099572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.617 [2024-06-10 10:16:36.099586] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:27:46.617 [2024-06-10 10:16:36.099598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.617 [2024-06-10 10:16:36.115816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.617 [2024-06-10 10:16:36.115862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.617 [2024-06-10 10:16:36.115881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.190 ms 00:27:46.617 [2024-06-10 10:16:36.115894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.875 [2024-06-10 10:16:36.132400] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:46.876 [2024-06-10 10:16:36.132445] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:46.876 [2024-06-10 10:16:36.132469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.132482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:46.876 [2024-06-10 10:16:36.132497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.418 ms 00:27:46.876 [2024-06-10 10:16:36.132510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.162528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.162608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:46.876 [2024-06-10 10:16:36.162629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.964 ms 00:27:46.876 [2024-06-10 10:16:36.162667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.178724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.178773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:46.876 [2024-06-10 10:16:36.178792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.977 ms 00:27:46.876 [2024-06-10 10:16:36.178805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.194506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.194550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:46.876 [2024-06-10 10:16:36.194568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.643 ms 00:27:46.876 [2024-06-10 10:16:36.194581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.195423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.195462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:46.876 [2024-06-10 10:16:36.195479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:27:46.876 [2024-06-10 10:16:36.195491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.269389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.269470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:46.876 [2024-06-10 10:16:36.269491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.870 ms 00:27:46.876 
[2024-06-10 10:16:36.269504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.282145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:46.876 [2024-06-10 10:16:36.284820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.284861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:46.876 [2024-06-10 10:16:36.284881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.237 ms 00:27:46.876 [2024-06-10 10:16:36.284895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.285021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.285041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:46.876 [2024-06-10 10:16:36.285056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:46.876 [2024-06-10 10:16:36.285069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.285733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.285761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:46.876 [2024-06-10 10:16:36.285781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:27:46.876 [2024-06-10 10:16:36.285793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.285829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.285845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:46.876 [2024-06-10 10:16:36.285858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:46.876 [2024-06-10 10:16:36.285871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.285912] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:46.876 [2024-06-10 10:16:36.285931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.285943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:46.876 [2024-06-10 10:16:36.285956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:46.876 [2024-06-10 10:16:36.285972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.316955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.317018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:46.876 [2024-06-10 10:16:36.317037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.954 ms 00:27:46.876 [2024-06-10 10:16:36.317051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 10:16:36.317162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.876 [2024-06-10 10:16:36.317182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:46.876 [2024-06-10 10:16:36.317204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:46.876 [2024-06-10 10:16:36.317218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.876 [2024-06-10 
10:16:36.318394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.972 ms, result 0 00:28:26.290  Copying: 29/1024 [MB] (29 MBps) Copying: 54/1024 [MB] (25 MBps) Copying: 80/1024 [MB] (25 MBps) Copying: 107/1024 [MB] (27 MBps) Copying: 134/1024 [MB] (26 MBps) Copying: 160/1024 [MB] (26 MBps) Copying: 187/1024 [MB] (26 MBps) Copying: 213/1024 [MB] (25 MBps) Copying: 241/1024 [MB] (27 MBps) Copying: 265/1024 [MB] (24 MBps) Copying: 292/1024 [MB] (26 MBps) Copying: 317/1024 [MB] (25 MBps) Copying: 343/1024 [MB] (25 MBps) Copying: 368/1024 [MB] (25 MBps) Copying: 395/1024 [MB] (26 MBps) Copying: 421/1024 [MB] (25 MBps) Copying: 447/1024 [MB] (26 MBps) Copying: 472/1024 [MB] (25 MBps) Copying: 499/1024 [MB] (26 MBps) Copying: 524/1024 [MB] (24 MBps) Copying: 548/1024 [MB] (23 MBps) Copying: 574/1024 [MB] (26 MBps) Copying: 599/1024 [MB] (25 MBps) Copying: 624/1024 [MB] (25 MBps) Copying: 652/1024 [MB] (27 MBps) Copying: 679/1024 [MB] (26 MBps) Copying: 707/1024 [MB] (27 MBps) Copying: 733/1024 [MB] (26 MBps) Copying: 761/1024 [MB] (27 MBps) Copying: 787/1024 [MB] (25 MBps) Copying: 814/1024 [MB] (26 MBps) Copying: 840/1024 [MB] (26 MBps) Copying: 864/1024 [MB] (23 MBps) Copying: 891/1024 [MB] (26 MBps) Copying: 918/1024 [MB] (27 MBps) Copying: 943/1024 [MB] (24 MBps) Copying: 970/1024 [MB] (27 MBps) Copying: 997/1024 [MB] (26 MBps) Copying: 1022/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-06-10 10:17:15.601591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.290 [2024-06-10 10:17:15.601699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:26.290 [2024-06-10 10:17:15.601733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:26.290 [2024-06-10 10:17:15.601755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.290 [2024-06-10 10:17:15.601800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:26.290 [2024-06-10 10:17:15.606662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.290 [2024-06-10 10:17:15.606712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:26.290 [2024-06-10 10:17:15.606738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.828 ms 00:28:26.290 [2024-06-10 10:17:15.606759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.290 [2024-06-10 10:17:15.607105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.290 [2024-06-10 10:17:15.607154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:26.290 [2024-06-10 10:17:15.607178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:28:26.290 [2024-06-10 10:17:15.607199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.290 [2024-06-10 10:17:15.611447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.290 [2024-06-10 10:17:15.611494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:26.290 [2024-06-10 10:17:15.611519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:28:26.290 [2024-06-10 10:17:15.611540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.290 [2024-06-10 10:17:15.621983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.290 [2024-06-10 
10:17:15.622034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:26.290 [2024-06-10 10:17:15.622084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.407 ms 00:28:26.290 [2024-06-10 10:17:15.622106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.290 [2024-06-10 10:17:15.656889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.290 [2024-06-10 10:17:15.656940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:26.290 [2024-06-10 10:17:15.656959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.686 ms 00:28:26.290 [2024-06-10 10:17:15.656972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.290 [2024-06-10 10:17:15.675149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.290 [2024-06-10 10:17:15.675215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:26.290 [2024-06-10 10:17:15.675234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.129 ms 00:28:26.290 [2024-06-10 10:17:15.675247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.290 [2024-06-10 10:17:15.678468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.291 [2024-06-10 10:17:15.678510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:26.291 [2024-06-10 10:17:15.678526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:28:26.291 [2024-06-10 10:17:15.678540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.291 [2024-06-10 10:17:15.710035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.291 [2024-06-10 10:17:15.710087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:26.291 [2024-06-10 10:17:15.710105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.464 ms 00:28:26.291 [2024-06-10 10:17:15.710117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.291 [2024-06-10 10:17:15.741169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.291 [2024-06-10 10:17:15.741213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:26.291 [2024-06-10 10:17:15.741231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.002 ms 00:28:26.291 [2024-06-10 10:17:15.741243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.291 [2024-06-10 10:17:15.772484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.291 [2024-06-10 10:17:15.772572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:26.291 [2024-06-10 10:17:15.772593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.196 ms 00:28:26.291 [2024-06-10 10:17:15.772605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.291 [2024-06-10 10:17:15.803537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.291 [2024-06-10 10:17:15.803603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:26.291 [2024-06-10 10:17:15.803623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.807 ms 00:28:26.291 [2024-06-10 10:17:15.803636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.291 [2024-06-10 10:17:15.803700] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:26.291 [2024-06-10 10:17:15.803727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:26.291 [2024-06-10 10:17:15.803743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:28:26.291 [2024-06-10 10:17:15.803756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.803995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804032] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:26.291 [2024-06-10 10:17:15.804323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 
10:17:15.804348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:28:26.292 [2024-06-10 10:17:15.804693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.804991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.805004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:26.292 [2024-06-10 10:17:15.805026] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:26.292 [2024-06-10 10:17:15.805038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 506b159a-1297-4728-99de-db62ad24bd2a 00:28:26.292 [2024-06-10 10:17:15.805051] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:28:26.292 [2024-06-10 10:17:15.805063] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:26.292 [2024-06-10 10:17:15.805074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:26.292 [2024-06-10 10:17:15.805094] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:26.292 [2024-06-10 10:17:15.805107] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:26.292 [2024-06-10 10:17:15.805119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:26.292 [2024-06-10 10:17:15.805132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:26.292 [2024-06-10 10:17:15.805143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:26.292 [2024-06-10 10:17:15.805154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:26.292 [2024-06-10 10:17:15.805165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.292 [2024-06-10 10:17:15.805177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:26.292 [2024-06-10 10:17:15.805190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:28:26.292 [2024-06-10 10:17:15.805203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:15.821757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.552 [2024-06-10 10:17:15.821817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:26.552 [2024-06-10 10:17:15.821837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.503 ms 00:28:26.552 [2024-06-10 10:17:15.821851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:15.822307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.552 [2024-06-10 10:17:15.822333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:26.552 [2024-06-10 10:17:15.822348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:28:26.552 [2024-06-10 10:17:15.822360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:15.859053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:15.859124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:26.552 [2024-06-10 10:17:15.859143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:15.859156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:15.859244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:15.859261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:26.552 [2024-06-10 10:17:15.859274] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:15.859286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:15.859390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:15.859410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:26.552 [2024-06-10 10:17:15.859423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:15.859435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:15.859459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:15.859473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:26.552 [2024-06-10 10:17:15.859486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:15.859498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:15.957794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:15.957858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:26.552 [2024-06-10 10:17:15.957878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:15.957891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:16.041730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:16.041792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:26.552 [2024-06-10 10:17:16.041812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:16.041825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:16.041902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:16.041929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:26.552 [2024-06-10 10:17:16.041942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:16.041954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:16.042001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:16.042016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:26.552 [2024-06-10 10:17:16.042028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:16.042040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:16.042167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:16.042192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:26.552 [2024-06-10 10:17:16.042213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:16.042226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:16.042282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:16.042299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:28:26.552 [2024-06-10 10:17:16.042313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.552 [2024-06-10 10:17:16.042325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.552 [2024-06-10 10:17:16.042371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.552 [2024-06-10 10:17:16.042386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:26.552 [2024-06-10 10:17:16.042406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.553 [2024-06-10 10:17:16.042418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.553 [2024-06-10 10:17:16.042481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.553 [2024-06-10 10:17:16.042508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:26.553 [2024-06-10 10:17:16.042523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.553 [2024-06-10 10:17:16.042536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.553 [2024-06-10 10:17:16.042706] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 441.083 ms, result 0 00:28:27.938 00:28:27.938 00:28:27.938 10:17:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:30.485 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:30.485 Process with pid 83312 is not found 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83312 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@949 -- # '[' -z 83312 ']' 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@953 -- # kill -0 83312 00:28:30.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (83312) - No such process 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@976 -- # echo 'Process with pid 83312 is not found' 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:30.485 Remove shared memory files 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 
00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:30.485 ************************************ 00:28:30.485 END TEST ftl_dirty_shutdown 00:28:30.485 ************************************ 00:28:30.485 00:28:30.485 real 3m48.202s 00:28:30.485 user 4m22.450s 00:28:30.485 sys 0m39.204s 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:30.485 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.743 10:17:20 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:30.743 10:17:20 ftl -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:28:30.743 10:17:20 ftl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:30.743 10:17:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:30.743 ************************************ 00:28:30.743 START TEST ftl_upgrade_shutdown 00:28:30.743 ************************************ 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:30.743 * Looking for test storage... 00:28:30.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
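The dirty-shutdown test ends here and the harness immediately launches the next case, ftl_upgrade_shutdown, handing it the base and cache controllers on the command line. For reference, a rough way to reproduce just that step outside the autotest wrapper (run_test is the harness helper that prints the START/END banners and the timing summary; the PCI addresses are the ones used throughout this log):

cd /home/vagrant/spdk_repo/spdk
# base bdev comes from 0000:00:11.0, the NV-cache bdev from 0000:00:10.0
./test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0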
00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:30.743 
10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85663 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85663 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@830 -- # '[' -z 85663 ']' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:30.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:30.743 10:17:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.743 [2024-06-10 10:17:20.245312] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:28:30.743 [2024-06-10 10:17:20.245454] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85663 ] 00:28:31.002 [2024-06-10 10:17:20.408771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.262 [2024-06-10 10:17:20.642892] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@863 -- # return 0 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:32.198 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local bdev_name=basen1 00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_info 00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bs 00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local nb 
00:28:32.456 10:17:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:28:32.714 { 00:28:32.714 "name": "basen1", 00:28:32.714 "aliases": [ 00:28:32.714 "86f82c9b-efb3-489b-91cf-a3aec5e7f9fd" 00:28:32.714 ], 00:28:32.714 "product_name": "NVMe disk", 00:28:32.714 "block_size": 4096, 00:28:32.714 "num_blocks": 1310720, 00:28:32.714 "uuid": "86f82c9b-efb3-489b-91cf-a3aec5e7f9fd", 00:28:32.714 "assigned_rate_limits": { 00:28:32.714 "rw_ios_per_sec": 0, 00:28:32.714 "rw_mbytes_per_sec": 0, 00:28:32.714 "r_mbytes_per_sec": 0, 00:28:32.714 "w_mbytes_per_sec": 0 00:28:32.714 }, 00:28:32.714 "claimed": true, 00:28:32.714 "claim_type": "read_many_write_one", 00:28:32.714 "zoned": false, 00:28:32.714 "supported_io_types": { 00:28:32.714 "read": true, 00:28:32.714 "write": true, 00:28:32.714 "unmap": true, 00:28:32.714 "write_zeroes": true, 00:28:32.714 "flush": true, 00:28:32.714 "reset": true, 00:28:32.714 "compare": true, 00:28:32.714 "compare_and_write": false, 00:28:32.714 "abort": true, 00:28:32.714 "nvme_admin": true, 00:28:32.714 "nvme_io": true 00:28:32.714 }, 00:28:32.714 "driver_specific": { 00:28:32.714 "nvme": [ 00:28:32.714 { 00:28:32.714 "pci_address": "0000:00:11.0", 00:28:32.714 "trid": { 00:28:32.714 "trtype": "PCIe", 00:28:32.714 "traddr": "0000:00:11.0" 00:28:32.714 }, 00:28:32.714 "ctrlr_data": { 00:28:32.714 "cntlid": 0, 00:28:32.714 "vendor_id": "0x1b36", 00:28:32.714 "model_number": "QEMU NVMe Ctrl", 00:28:32.714 "serial_number": "12341", 00:28:32.714 "firmware_revision": "8.0.0", 00:28:32.714 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:32.714 "oacs": { 00:28:32.714 "security": 0, 00:28:32.714 "format": 1, 00:28:32.714 "firmware": 0, 00:28:32.714 "ns_manage": 1 00:28:32.714 }, 00:28:32.714 "multi_ctrlr": false, 00:28:32.714 "ana_reporting": false 00:28:32.714 }, 00:28:32.714 "vs": { 00:28:32.714 "nvme_version": "1.4" 00:28:32.714 }, 00:28:32.714 "ns_data": { 00:28:32.714 "id": 1, 00:28:32.714 "can_share": false 00:28:32.714 } 00:28:32.714 } 00:28:32.714 ], 00:28:32.714 "mp_policy": "active_passive" 00:28:32.714 } 00:28:32.714 } 00:28:32.714 ]' 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bs=4096 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # nb=1310720 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_size=5120 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # echo 5120 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:32.714 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:32.972 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8d58bfa2-f4be-4730-b45d-b6010bef53e9 00:28:32.972 10:17:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:28:32.972 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d58bfa2-f4be-4730-b45d-b6010bef53e9 00:28:33.230 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:33.488 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=72f17c32-a956-47a0-8bd1-314036ff935d 00:28:33.488 10:17:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 72f17c32-a956-47a0-8bd1-314036ff935d 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=0aaaee1a-af4e-4663-af3c-2d9c05f33116 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 0aaaee1a-af4e-4663-af3c-2d9c05f33116 ]] 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 0aaaee1a-af4e-4663-af3c-2d9c05f33116 5120 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=0aaaee1a-af4e-4663-af3c-2d9c05f33116 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0aaaee1a-af4e-4663-af3c-2d9c05f33116 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local bdev_name=0aaaee1a-af4e-4663-af3c-2d9c05f33116 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_info 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bs 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local nb 00:28:33.746 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0aaaee1a-af4e-4663-af3c-2d9c05f33116 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:28:34.313 { 00:28:34.313 "name": "0aaaee1a-af4e-4663-af3c-2d9c05f33116", 00:28:34.313 "aliases": [ 00:28:34.313 "lvs/basen1p0" 00:28:34.313 ], 00:28:34.313 "product_name": "Logical Volume", 00:28:34.313 "block_size": 4096, 00:28:34.313 "num_blocks": 5242880, 00:28:34.313 "uuid": "0aaaee1a-af4e-4663-af3c-2d9c05f33116", 00:28:34.313 "assigned_rate_limits": { 00:28:34.313 "rw_ios_per_sec": 0, 00:28:34.313 "rw_mbytes_per_sec": 0, 00:28:34.313 "r_mbytes_per_sec": 0, 00:28:34.313 "w_mbytes_per_sec": 0 00:28:34.313 }, 00:28:34.313 "claimed": false, 00:28:34.313 "zoned": false, 00:28:34.313 "supported_io_types": { 00:28:34.313 "read": true, 00:28:34.313 "write": true, 00:28:34.313 "unmap": true, 00:28:34.313 "write_zeroes": true, 00:28:34.313 "flush": false, 00:28:34.313 "reset": true, 00:28:34.313 "compare": false, 00:28:34.313 "compare_and_write": false, 00:28:34.313 "abort": false, 00:28:34.313 "nvme_admin": false, 00:28:34.313 "nvme_io": false 00:28:34.313 }, 00:28:34.313 "driver_specific": { 00:28:34.313 "lvol": { 00:28:34.313 "lvol_store_uuid": "72f17c32-a956-47a0-8bd1-314036ff935d", 00:28:34.313 "base_bdev": "basen1", 00:28:34.313 "thin_provision": true, 00:28:34.313 "num_allocated_clusters": 0, 00:28:34.313 
"snapshot": false, 00:28:34.313 "clone": false, 00:28:34.313 "esnap_clone": false 00:28:34.313 } 00:28:34.313 } 00:28:34.313 } 00:28:34.313 ]' 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bs=4096 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # nb=5242880 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_size=20480 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # echo 20480 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:34.313 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:34.571 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:34.571 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:34.571 10:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:34.829 10:17:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:34.829 10:17:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:34.829 10:17:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 0aaaee1a-af4e-4663-af3c-2d9c05f33116 -c cachen1p0 --l2p_dram_limit 2 00:28:35.143 [2024-06-10 10:17:24.552373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.552447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:35.143 [2024-06-10 10:17:24.552471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:35.143 [2024-06-10 10:17:24.552485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.552569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.552587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:35.143 [2024-06-10 10:17:24.552603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:28:35.143 [2024-06-10 10:17:24.552615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.552675] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:35.143 [2024-06-10 10:17:24.553665] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:35.143 [2024-06-10 10:17:24.553703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.553717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:35.143 [2024-06-10 10:17:24.553735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.063 ms 00:28:35.143 [2024-06-10 10:17:24.553747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.553889] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID 419156a7-f19d-4252-a237-76bd50e15b42 00:28:35.143 [2024-06-10 10:17:24.554949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.554993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:35.143 [2024-06-10 10:17:24.555010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:35.143 [2024-06-10 10:17:24.555029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.559677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.559722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:35.143 [2024-06-10 10:17:24.559742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.581 ms 00:28:35.143 [2024-06-10 10:17:24.559755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.559818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.559840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:35.143 [2024-06-10 10:17:24.559853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:35.143 [2024-06-10 10:17:24.559868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.559960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.559986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:35.143 [2024-06-10 10:17:24.559999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:35.143 [2024-06-10 10:17:24.560013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.560048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:35.143 [2024-06-10 10:17:24.564694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.564761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:35.143 [2024-06-10 10:17:24.564783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.651 ms 00:28:35.143 [2024-06-10 10:17:24.564795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.564852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.564868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:35.143 [2024-06-10 10:17:24.564882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:35.143 [2024-06-10 10:17:24.564893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.564946] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:35.143 [2024-06-10 10:17:24.565110] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:35.143 [2024-06-10 10:17:24.565131] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:35.143 [2024-06-10 10:17:24.565147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:35.143 [2024-06-10 10:17:24.565166] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:35.143 [2024-06-10 10:17:24.565180] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:35.143 [2024-06-10 10:17:24.565196] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:35.143 [2024-06-10 10:17:24.565208] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:35.143 [2024-06-10 10:17:24.565221] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:35.143 [2024-06-10 10:17:24.565234] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:35.143 [2024-06-10 10:17:24.565255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.565267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:35.143 [2024-06-10 10:17:24.565282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 00:28:35.143 [2024-06-10 10:17:24.565293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.565389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.143 [2024-06-10 10:17:24.565402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:35.143 [2024-06-10 10:17:24.565428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:28:35.143 [2024-06-10 10:17:24.565439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.143 [2024-06-10 10:17:24.565553] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:35.143 [2024-06-10 10:17:24.565579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:35.143 [2024-06-10 10:17:24.565598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:35.143 [2024-06-10 10:17:24.565611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.143 [2024-06-10 10:17:24.565625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:35.143 [2024-06-10 10:17:24.565635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:35.143 [2024-06-10 10:17:24.565671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:35.143 [2024-06-10 10:17:24.565683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:35.143 [2024-06-10 10:17:24.565711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:35.143 [2024-06-10 10:17:24.565722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.143 [2024-06-10 10:17:24.565734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:35.143 [2024-06-10 10:17:24.565745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:35.143 [2024-06-10 10:17:24.565757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.143 [2024-06-10 10:17:24.565768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:35.143 [2024-06-10 10:17:24.565786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:35.143 [2024-06-10 10:17:24.565797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.143 [2024-06-10 10:17:24.565809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:35.143 [2024-06-10 10:17:24.565820] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:35.143 [2024-06-10 10:17:24.565835] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.143 [2024-06-10 10:17:24.565847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:35.143 [2024-06-10 10:17:24.565859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:35.143 [2024-06-10 10:17:24.565870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.143 [2024-06-10 10:17:24.565882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:35.143 [2024-06-10 10:17:24.565893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:35.143 [2024-06-10 10:17:24.565906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.143 [2024-06-10 10:17:24.565917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:35.143 [2024-06-10 10:17:24.565929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:35.143 [2024-06-10 10:17:24.565939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.144 [2024-06-10 10:17:24.565952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:35.144 [2024-06-10 10:17:24.565962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:35.144 [2024-06-10 10:17:24.565974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.144 [2024-06-10 10:17:24.565985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:35.144 [2024-06-10 10:17:24.565997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:35.144 [2024-06-10 10:17:24.566008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.144 [2024-06-10 10:17:24.566024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:35.144 [2024-06-10 10:17:24.566035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:35.144 [2024-06-10 10:17:24.566047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.144 [2024-06-10 10:17:24.566058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:35.144 [2024-06-10 10:17:24.566071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:35.144 [2024-06-10 10:17:24.566081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.144 [2024-06-10 10:17:24.566094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:35.144 [2024-06-10 10:17:24.566105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:35.144 [2024-06-10 10:17:24.566117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.144 [2024-06-10 10:17:24.566127] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:35.144 [2024-06-10 10:17:24.566140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:35.144 [2024-06-10 10:17:24.566151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:35.144 [2024-06-10 10:17:24.566164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.144 [2024-06-10 10:17:24.566176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:35.144 [2024-06-10 10:17:24.566189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:35.144 [2024-06-10 10:17:24.566199] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:35.144 [2024-06-10 10:17:24.566214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:35.144 [2024-06-10 10:17:24.566225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:35.144 [2024-06-10 10:17:24.566238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:35.144 [2024-06-10 10:17:24.566254] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:35.144 [2024-06-10 10:17:24.566270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:35.144 [2024-06-10 10:17:24.566298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:35.144 [2024-06-10 10:17:24.566336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:35.144 [2024-06-10 10:17:24.566351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:35.144 [2024-06-10 10:17:24.566363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:35.144 [2024-06-10 10:17:24.566376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:35.144 [2024-06-10 10:17:24.566464] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:35.144 [2024-06-10 10:17:24.566482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:35.144 [2024-06-10 10:17:24.566508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:35.144 [2024-06-10 10:17:24.566520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:35.144 [2024-06-10 10:17:24.566534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:35.144 [2024-06-10 10:17:24.566547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.144 [2024-06-10 10:17:24.566561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:35.144 [2024-06-10 10:17:24.566573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.064 ms 00:28:35.144 [2024-06-10 10:17:24.566586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.144 [2024-06-10 10:17:24.566658] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:35.144 [2024-06-10 10:17:24.566680] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:38.450 [2024-06-10 10:17:27.597722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.597817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:38.450 [2024-06-10 10:17:27.597872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3031.083 ms 00:28:38.450 [2024-06-10 10:17:27.597887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.632939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.633013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:38.450 [2024-06-10 10:17:27.633036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.739 ms 00:28:38.450 [2024-06-10 10:17:27.633051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.633185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.633208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:38.450 [2024-06-10 10:17:27.633223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:38.450 [2024-06-10 10:17:27.633252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.673307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.673394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:38.450 [2024-06-10 10:17:27.673431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.966 ms 00:28:38.450 [2024-06-10 10:17:27.673445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.673515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.673535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:38.450 [2024-06-10 10:17:27.673551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:38.450 [2024-06-10 10:17:27.673566] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.674050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.674105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:38.450 [2024-06-10 10:17:27.674139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:28:38.450 [2024-06-10 10:17:27.674159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.674227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.674262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:38.450 [2024-06-10 10:17:27.674288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:38.450 [2024-06-10 10:17:27.674315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.691992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.692049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:38.450 [2024-06-10 10:17:27.692071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.645 ms 00:28:38.450 [2024-06-10 10:17:27.692086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.706072] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:38.450 [2024-06-10 10:17:27.707028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.707073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:38.450 [2024-06-10 10:17:27.707093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.822 ms 00:28:38.450 [2024-06-10 10:17:27.707106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.748853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.748933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:38.450 [2024-06-10 10:17:27.748957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.684 ms 00:28:38.450 [2024-06-10 10:17:27.748971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.450 [2024-06-10 10:17:27.749087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.450 [2024-06-10 10:17:27.749106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:38.450 [2024-06-10 10:17:27.749125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:38.451 [2024-06-10 10:17:27.749137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.451 [2024-06-10 10:17:27.782288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.451 [2024-06-10 10:17:27.782350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:38.451 [2024-06-10 10:17:27.782389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.073 ms 00:28:38.451 [2024-06-10 10:17:27.782402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.451 [2024-06-10 10:17:27.814489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.451 [2024-06-10 10:17:27.814582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info 
metadata 00:28:38.451 [2024-06-10 10:17:27.814621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.018 ms 00:28:38.451 [2024-06-10 10:17:27.814634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.451 [2024-06-10 10:17:27.815506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.451 [2024-06-10 10:17:27.815576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:38.451 [2024-06-10 10:17:27.815611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.787 ms 00:28:38.451 [2024-06-10 10:17:27.815623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.451 [2024-06-10 10:17:27.927472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.451 [2024-06-10 10:17:27.927557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:38.451 [2024-06-10 10:17:27.927581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 111.705 ms 00:28:38.451 [2024-06-10 10:17:27.927594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.451 [2024-06-10 10:17:27.959538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.451 [2024-06-10 10:17:27.959606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:38.451 [2024-06-10 10:17:27.959645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.900 ms 00:28:38.451 [2024-06-10 10:17:27.959669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.709 [2024-06-10 10:17:27.991515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.709 [2024-06-10 10:17:27.991578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:38.709 [2024-06-10 10:17:27.991610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.804 ms 00:28:38.709 [2024-06-10 10:17:27.991622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.709 [2024-06-10 10:17:28.022800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.709 [2024-06-10 10:17:28.022848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:38.709 [2024-06-10 10:17:28.022871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.115 ms 00:28:38.709 [2024-06-10 10:17:28.022884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.709 [2024-06-10 10:17:28.022924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.709 [2024-06-10 10:17:28.022939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:38.709 [2024-06-10 10:17:28.022955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:38.709 [2024-06-10 10:17:28.022971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.709 [2024-06-10 10:17:28.023118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.709 [2024-06-10 10:17:28.023141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:38.709 [2024-06-10 10:17:28.023157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:38.709 [2024-06-10 10:17:28.023171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.709 [2024-06-10 10:17:28.024317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3471.470 ms, result 0 00:28:38.709 { 
00:28:38.709 "name": "ftl", 00:28:38.709 "uuid": "419156a7-f19d-4252-a237-76bd50e15b42" 00:28:38.709 } 00:28:38.709 10:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:38.968 [2024-06-10 10:17:28.311524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.968 10:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:39.227 10:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:39.486 [2024-06-10 10:17:28.912424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:39.486 10:17:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:39.746 [2024-06-10 10:17:29.206210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:39.746 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:40.313 Fill FTL, iteration 1 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85791 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85791 /var/tmp/spdk.tgt.sock 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@830 -- # '[' -z 85791 ']' 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.tgt.sock 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:40.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:40.313 10:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:40.313 [2024-06-10 10:17:29.762884] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:40.313 [2024-06-10 10:17:29.763071] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85791 ] 00:28:40.571 [2024-06-10 10:17:29.928108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.830 [2024-06-10 10:17:30.134547] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.397 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:41.397 10:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@863 -- # return 0 00:28:41.397 10:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:41.655 ftln1 00:28:41.914 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:41.914 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85791 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@949 -- # '[' -z 85791 ']' 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # kill -0 85791 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # uname 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 85791 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:42.173 killing process with pid 85791 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # echo 'killing process with pid 85791' 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # kill 85791 00:28:42.173 10:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # wait 85791 00:28:44.076 10:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:44.076 10:17:33 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:44.334 [2024-06-10 10:17:33.652586] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:44.334 [2024-06-10 10:17:33.652734] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85844 ] 00:28:44.334 [2024-06-10 10:17:33.816129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.592 [2024-06-10 10:17:34.002029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.246  Copying: 210/1024 [MB] (210 MBps) Copying: 422/1024 [MB] (212 MBps) Copying: 637/1024 [MB] (215 MBps) Copying: 846/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:28:51.246 00:28:51.246 Calculate MD5 checksum, iteration 1 00:28:51.246 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:51.246 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:51.247 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:51.247 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:51.247 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:51.247 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:51.247 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:51.247 10:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:51.247 [2024-06-10 10:17:40.513992] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
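An aside on the plumbing traced above (common.sh@121-126 and @167): the FTL bdev is exported over NVMe/TCP and then re-attached from a second SPDK app, so spdk_dd can drive it remotely as bdev ftln1. A condensed sketch of that sequence using only the rpc.py calls visible in the trace; the paths and NQN are copied from this log, and the snippet is a reading aid, not harness code:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: export bdev "ftl" over NVMe/TCP on 127.0.0.1:4420.
  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $RPC save_config
  # Initiator side (the helper spdk_tgt listening on /var/tmp/spdk.tgt.sock):
  # attach the exported namespace; it surfaces as bdev "ftln1" for the dd runs.
  $RPC -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0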
00:28:51.247 [2024-06-10 10:17:40.514168] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85914 ] 00:28:51.247 [2024-06-10 10:17:40.690036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.505 [2024-06-10 10:17:40.910290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.009  Copying: 513/1024 [MB] (513 MBps) Copying: 972/1024 [MB] (459 MBps) Copying: 1024/1024 [MB] (average 480 MBps) 00:28:55.009 00:28:55.009 10:17:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:55.009 10:17:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:57.541 Fill FTL, iteration 2 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=73b11a823660032fbc8f9b2821714709 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:57.541 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:57.542 10:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:57.542 [2024-06-10 10:17:46.749223] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
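The offset and checksum bookkeeping driving these passes is visible in the upgrade_shutdown.sh xtrace: seek and skip each advance by count (1024 x 1 MiB) per iteration, and every pass's md5 is stashed in sums[] for comparison once the shutdown/upgrade path has run. A minimal sketch of that loop, assuming tcp_dd (the harness wrapper traced above) is in scope and using the literal file path the log shows:

  bs=1048576 count=1024 qd=2 iterations=2
  seek=0 skip=0
  sums=()
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      # Write random data into the FTL bdev over NVMe/TCP.
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      # Read the same range back into a file and record its checksum.
      tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum "$file" | cut -f1 '-d ')
  done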
00:28:57.542 [2024-06-10 10:17:46.749362] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85977 ] 00:28:57.542 [2024-06-10 10:17:46.910492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.800 [2024-06-10 10:17:47.143293] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.460  Copying: 212/1024 [MB] (212 MBps) Copying: 426/1024 [MB] (214 MBps) Copying: 620/1024 [MB] (194 MBps) Copying: 826/1024 [MB] (206 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:29:04.460 00:29:04.460 Calculate MD5 checksum, iteration 2 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:04.460 10:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:04.460 [2024-06-10 10:17:53.889508] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
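Each tcp_dd call above first runs tcp_initiator_setup (common.sh@198), which reuses a pre-captured initiator config instead of re-attaching every time; the first pass's trace shows how that config was produced. A hedged sketch of the capture step follows; the redirect into ini.json is an assumption (the trace shows the echoes and save_subsystem_config but not the redirection itself), and plain kill plus sleep stand in for the harness's killprocess and waitforlisten helpers:

  # One-shot initiator: start a throwaway spdk_tgt on a private RPC socket,
  # attach the NVMe/TCP namespace, capture the bdev subsystem config, stop it.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  sleep 1   # the harness waits with waitforlisten; a delay stands in here
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill "$spdk_ini_pid"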
00:29:04.460 [2024-06-10 10:17:53.889664] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86048 ] 00:29:04.719 [2024-06-10 10:17:54.048791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.977 [2024-06-10 10:17:54.240587] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.657  Copying: 484/1024 [MB] (484 MBps) Copying: 961/1024 [MB] (477 MBps) Copying: 1024/1024 [MB] (average 477 MBps) 00:29:09.657 00:29:09.657 10:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:09.657 10:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:11.558 10:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:11.558 10:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ff9106411f7173e38ffe467bc51ddfa8 00:29:11.558 10:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:11.558 10:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:11.558 10:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:11.816 [2024-06-10 10:18:01.211245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.816 [2024-06-10 10:18:01.211327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:11.816 [2024-06-10 10:18:01.211352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:11.816 [2024-06-10 10:18:01.211367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.816 [2024-06-10 10:18:01.211416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.816 [2024-06-10 10:18:01.211434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:11.816 [2024-06-10 10:18:01.211450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:11.816 [2024-06-10 10:18:01.211463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.816 [2024-06-10 10:18:01.211505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.816 [2024-06-10 10:18:01.211527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:11.816 [2024-06-10 10:18:01.211557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:11.816 [2024-06-10 10:18:01.211571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.816 [2024-06-10 10:18:01.211684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.424 ms, result 0 00:29:11.816 true 00:29:11.816 10:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:12.074 { 00:29:12.074 "name": "ftl", 00:29:12.074 "properties": [ 00:29:12.074 { 00:29:12.074 "name": "superblock_version", 00:29:12.074 "value": 5, 00:29:12.074 "read-only": true 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "name": "base_device", 00:29:12.074 "bands": [ 00:29:12.074 { 00:29:12.074 "id": 0, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 
"id": 1, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 2, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 3, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 4, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 5, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 6, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 7, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 8, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 9, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 10, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 11, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 12, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 13, 00:29:12.074 "state": "FREE", 00:29:12.074 "validity": 0.0 00:29:12.074 }, 00:29:12.074 { 00:29:12.074 "id": 14, 00:29:12.075 "state": "FREE", 00:29:12.075 "validity": 0.0 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "id": 15, 00:29:12.075 "state": "FREE", 00:29:12.075 "validity": 0.0 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "id": 16, 00:29:12.075 "state": "FREE", 00:29:12.075 "validity": 0.0 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "id": 17, 00:29:12.075 "state": "FREE", 00:29:12.075 "validity": 0.0 00:29:12.075 } 00:29:12.075 ], 00:29:12.075 "read-only": true 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "name": "cache_device", 00:29:12.075 "type": "bdev", 00:29:12.075 "chunks": [ 00:29:12.075 { 00:29:12.075 "id": 0, 00:29:12.075 "state": "INACTIVE", 00:29:12.075 "utilization": 0.0 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "id": 1, 00:29:12.075 "state": "CLOSED", 00:29:12.075 "utilization": 1.0 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "id": 2, 00:29:12.075 "state": "CLOSED", 00:29:12.075 "utilization": 1.0 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "id": 3, 00:29:12.075 "state": "OPEN", 00:29:12.075 "utilization": 0.001953125 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "id": 4, 00:29:12.075 "state": "OPEN", 00:29:12.075 "utilization": 0.0 00:29:12.075 } 00:29:12.075 ], 00:29:12.075 "read-only": true 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "name": "verbose_mode", 00:29:12.075 "value": true, 00:29:12.075 "unit": "", 00:29:12.075 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:12.075 }, 00:29:12.075 { 00:29:12.075 "name": "prep_upgrade_on_shutdown", 00:29:12.075 "value": false, 00:29:12.075 "unit": "", 00:29:12.075 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:12.075 } 00:29:12.075 ] 00:29:12.075 } 00:29:12.075 10:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:12.333 [2024-06-10 10:18:01.744688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.333 [2024-06-10 10:18:01.744775] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:12.333 [2024-06-10 10:18:01.744815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:12.333 [2024-06-10 10:18:01.744843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.333 [2024-06-10 10:18:01.744911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.333 [2024-06-10 10:18:01.744943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:12.333 [2024-06-10 10:18:01.744967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:12.333 [2024-06-10 10:18:01.744988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.333 [2024-06-10 10:18:01.745046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.333 [2024-06-10 10:18:01.745074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:12.333 [2024-06-10 10:18:01.745099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:12.333 [2024-06-10 10:18:01.745123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.333 [2024-06-10 10:18:01.745251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.557 ms, result 0 00:29:12.333 true 00:29:12.333 10:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:12.333 10:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:12.333 10:18:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:12.591 10:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:12.591 10:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:12.591 10:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:12.849 [2024-06-10 10:18:02.357447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.849 [2024-06-10 10:18:02.357509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:12.849 [2024-06-10 10:18:02.357530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:12.849 [2024-06-10 10:18:02.357541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.849 [2024-06-10 10:18:02.357576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.849 [2024-06-10 10:18:02.357591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:12.849 [2024-06-10 10:18:02.357604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:12.849 [2024-06-10 10:18:02.357615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.849 [2024-06-10 10:18:02.357660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.849 [2024-06-10 10:18:02.357681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:12.849 [2024-06-10 10:18:02.357693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:12.849 [2024-06-10 10:18:02.357703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.849 [2024-06-10 
10:18:02.357779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.319 ms, result 0 00:29:12.849 true 00:29:13.107 10:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:13.366 { 00:29:13.366 "name": "ftl", 00:29:13.366 "properties": [ 00:29:13.366 { 00:29:13.366 "name": "superblock_version", 00:29:13.366 "value": 5, 00:29:13.366 "read-only": true 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "name": "base_device", 00:29:13.366 "bands": [ 00:29:13.366 { 00:29:13.366 "id": 0, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 1, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 2, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 3, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 4, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 5, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 6, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 7, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 8, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 9, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 10, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 11, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 12, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 13, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 14, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 15, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 16, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 17, 00:29:13.366 "state": "FREE", 00:29:13.366 "validity": 0.0 00:29:13.366 } 00:29:13.366 ], 00:29:13.366 "read-only": true 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "name": "cache_device", 00:29:13.366 "type": "bdev", 00:29:13.366 "chunks": [ 00:29:13.366 { 00:29:13.366 "id": 0, 00:29:13.366 "state": "INACTIVE", 00:29:13.366 "utilization": 0.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 1, 00:29:13.366 "state": "CLOSED", 00:29:13.366 "utilization": 1.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 2, 00:29:13.366 "state": "CLOSED", 00:29:13.366 "utilization": 1.0 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 3, 00:29:13.366 "state": "OPEN", 00:29:13.366 "utilization": 0.001953125 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "id": 4, 00:29:13.366 "state": "OPEN", 00:29:13.366 "utilization": 0.0 00:29:13.366 } 00:29:13.366 ], 00:29:13.366 "read-only": true 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "name": "verbose_mode", 00:29:13.366 "value": true, 00:29:13.366 "unit": 
"", 00:29:13.366 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:13.366 }, 00:29:13.366 { 00:29:13.366 "name": "prep_upgrade_on_shutdown", 00:29:13.366 "value": true, 00:29:13.366 "unit": "", 00:29:13.366 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:13.366 } 00:29:13.366 ] 00:29:13.366 } 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85663 ]] 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85663 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@949 -- # '[' -z 85663 ']' 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # kill -0 85663 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # uname 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 85663 00:29:13.366 killing process with pid 85663 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # echo 'killing process with pid 85663' 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # kill 85663 00:29:13.366 10:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # wait 85663 00:29:14.323 [2024-06-10 10:18:03.684462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:14.323 [2024-06-10 10:18:03.709150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.323 [2024-06-10 10:18:03.715292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:14.323 [2024-06-10 10:18:03.715382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:14.323 [2024-06-10 10:18:03.715410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.323 [2024-06-10 10:18:03.715485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:14.323 [2024-06-10 10:18:03.719765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.323 [2024-06-10 10:18:03.719822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:14.323 [2024-06-10 10:18:03.719848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.240 ms 00:29:14.323 [2024-06-10 10:18:03.719867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.351 [2024-06-10 10:18:12.374928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.375007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:24.352 [2024-06-10 10:18:12.375030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8655.062 ms 00:29:24.352 [2024-06-10 10:18:12.375043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.376313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 
10:18:12.376355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:24.352 [2024-06-10 10:18:12.376370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.246 ms 00:29:24.352 [2024-06-10 10:18:12.376390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.377661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.377695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:24.352 [2024-06-10 10:18:12.377710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.230 ms 00:29:24.352 [2024-06-10 10:18:12.377722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.390360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.390404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:24.352 [2024-06-10 10:18:12.390421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.583 ms 00:29:24.352 [2024-06-10 10:18:12.390433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.398212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.398263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:24.352 [2024-06-10 10:18:12.398280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.735 ms 00:29:24.352 [2024-06-10 10:18:12.398292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.398411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.398432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:24.352 [2024-06-10 10:18:12.398445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:29:24.352 [2024-06-10 10:18:12.398456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.410763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.410803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:24.352 [2024-06-10 10:18:12.410819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.285 ms 00:29:24.352 [2024-06-10 10:18:12.410831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.423240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.423279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:24.352 [2024-06-10 10:18:12.423294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.369 ms 00:29:24.352 [2024-06-10 10:18:12.423305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.435596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.435668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:24.352 [2024-06-10 10:18:12.435687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.247 ms 00:29:24.352 [2024-06-10 10:18:12.435698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.448307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:24.352 [2024-06-10 10:18:12.448373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:24.352 [2024-06-10 10:18:12.448392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.484 ms 00:29:24.352 [2024-06-10 10:18:12.448405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.448449] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:24.352 [2024-06-10 10:18:12.448473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:24.352 [2024-06-10 10:18:12.448487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:24.352 [2024-06-10 10:18:12.448500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:24.352 [2024-06-10 10:18:12.448512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:24.352 [2024-06-10 10:18:12.448720] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:24.352 [2024-06-10 10:18:12.448738] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 419156a7-f19d-4252-a237-76bd50e15b42 00:29:24.352 [2024-06-10 10:18:12.448750] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:24.352 [2024-06-10 10:18:12.448761] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:24.352 [2024-06-10 
10:18:12.448772] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:24.352 [2024-06-10 10:18:12.448783] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:24.352 [2024-06-10 10:18:12.448794] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:24.352 [2024-06-10 10:18:12.448805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:24.352 [2024-06-10 10:18:12.448816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:24.352 [2024-06-10 10:18:12.448826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:24.352 [2024-06-10 10:18:12.448835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:24.352 [2024-06-10 10:18:12.448846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.448858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:24.352 [2024-06-10 10:18:12.448869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:29:24.352 [2024-06-10 10:18:12.448880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.467605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.467684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:24.352 [2024-06-10 10:18:12.467704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.670 ms 00:29:24.352 [2024-06-10 10:18:12.467716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.468171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.352 [2024-06-10 10:18:12.468199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:24.352 [2024-06-10 10:18:12.468213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:29:24.352 [2024-06-10 10:18:12.468236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.520515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.352 [2024-06-10 10:18:12.520590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:24.352 [2024-06-10 10:18:12.520608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.352 [2024-06-10 10:18:12.520621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.520698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.352 [2024-06-10 10:18:12.520715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:24.352 [2024-06-10 10:18:12.520727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.352 [2024-06-10 10:18:12.520746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.520858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.352 [2024-06-10 10:18:12.520877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:24.352 [2024-06-10 10:18:12.520889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.352 [2024-06-10 10:18:12.520900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.520925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.352 
[2024-06-10 10:18:12.520938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:24.352 [2024-06-10 10:18:12.520950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.352 [2024-06-10 10:18:12.520960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.622766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.352 [2024-06-10 10:18:12.622840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:24.352 [2024-06-10 10:18:12.622860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.352 [2024-06-10 10:18:12.622872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.708860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.352 [2024-06-10 10:18:12.708933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:24.352 [2024-06-10 10:18:12.708954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.352 [2024-06-10 10:18:12.708978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.352 [2024-06-10 10:18:12.709105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.352 [2024-06-10 10:18:12.709123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:24.352 [2024-06-10 10:18:12.709136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.352 [2024-06-10 10:18:12.709147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.353 [2024-06-10 10:18:12.709203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.353 [2024-06-10 10:18:12.709218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:24.353 [2024-06-10 10:18:12.709230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.353 [2024-06-10 10:18:12.709241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.353 [2024-06-10 10:18:12.709374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.353 [2024-06-10 10:18:12.709403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:24.353 [2024-06-10 10:18:12.709417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.353 [2024-06-10 10:18:12.709429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.353 [2024-06-10 10:18:12.709493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.353 [2024-06-10 10:18:12.709519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:24.353 [2024-06-10 10:18:12.709532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.353 [2024-06-10 10:18:12.709544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.353 [2024-06-10 10:18:12.709596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:24.353 [2024-06-10 10:18:12.709613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:24.353 [2024-06-10 10:18:12.709625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:24.353 [2024-06-10 10:18:12.709636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.353 [2024-06-10 10:18:12.709710] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Rollback
00:29:24.353 [2024-06-10 10:18:12.709727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:29:24.353 [2024-06-10 10:18:12.709739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:24.353 [2024-06-10 10:18:12.709751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:24.353 [2024-06-10 10:18:12.709893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9000.773 ms, result 0
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86272
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86272
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@830 -- # '[' -z 86272 ']'
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:26.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:26.884 10:18:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:26.884 [2024-06-10 10:18:16.241467] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
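With prep_upgrade_on_shutdown flipped to true earlier, the 'FTL shutdown' management process (9000.773 ms in the trace above) persists the L2P, NV cache and band metadata before exit, and the spdk_tgt restarted here from tgt.json reloads the device from that saved state. A hedged spot-check one could run against the restarted target, reusing the get_properties RPC and the harness's own jq filter from earlier in this log; the pre-shutdown run reported superblock version 5 and three non-empty cache chunks, and the sketch merely re-reads those values after restart:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Superblock version of the reloaded FTL instance.
  $RPC bdev_ftl_get_properties -b ftl |
      jq '.properties[] | select(.name == "superblock_version") | .value'
  # Non-empty NV cache chunks, same filter the harness used (it returned 3).
  $RPC bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'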
00:29:26.884 [2024-06-10 10:18:16.241659] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86272 ] 00:29:27.142 [2024-06-10 10:18:16.417779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.142 [2024-06-10 10:18:16.652243] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.074 [2024-06-10 10:18:17.450482] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:28.074 [2024-06-10 10:18:17.450557] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:28.333 [2024-06-10 10:18:17.592209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.592268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:28.333 [2024-06-10 10:18:17.592290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:28.333 [2024-06-10 10:18:17.592307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.333 [2024-06-10 10:18:17.592409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.592432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:28.333 [2024-06-10 10:18:17.592445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:29:28.333 [2024-06-10 10:18:17.592457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.333 [2024-06-10 10:18:17.592505] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:28.333 [2024-06-10 10:18:17.593528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:28.333 [2024-06-10 10:18:17.593571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.593596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:28.333 [2024-06-10 10:18:17.593610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:29:28.333 [2024-06-10 10:18:17.593620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.333 [2024-06-10 10:18:17.594989] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:28.333 [2024-06-10 10:18:17.612852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.612904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:28.333 [2024-06-10 10:18:17.612931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.865 ms 00:29:28.333 [2024-06-10 10:18:17.612949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.333 [2024-06-10 10:18:17.613040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.613061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:28.333 [2024-06-10 10:18:17.613079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:28.333 [2024-06-10 10:18:17.613090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.333 [2024-06-10 10:18:17.617753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.617801] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:28.333 [2024-06-10 10:18:17.617818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.551 ms 00:29:28.333 [2024-06-10 10:18:17.617830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.333 [2024-06-10 10:18:17.617934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.617964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:28.333 [2024-06-10 10:18:17.617983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:29:28.333 [2024-06-10 10:18:17.618004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.333 [2024-06-10 10:18:17.618092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.333 [2024-06-10 10:18:17.618110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:28.333 [2024-06-10 10:18:17.618129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:28.334 [2024-06-10 10:18:17.618142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.334 [2024-06-10 10:18:17.618180] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:28.334 [2024-06-10 10:18:17.622498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.334 [2024-06-10 10:18:17.622533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:28.334 [2024-06-10 10:18:17.622549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.326 ms 00:29:28.334 [2024-06-10 10:18:17.622561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.334 [2024-06-10 10:18:17.622601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.334 [2024-06-10 10:18:17.622616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:28.334 [2024-06-10 10:18:17.622629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:28.334 [2024-06-10 10:18:17.622656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.334 [2024-06-10 10:18:17.622717] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:28.334 [2024-06-10 10:18:17.622749] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:28.334 [2024-06-10 10:18:17.622794] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:28.334 [2024-06-10 10:18:17.622815] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:28.334 [2024-06-10 10:18:17.622922] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:28.334 [2024-06-10 10:18:17.622937] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:28.334 [2024-06-10 10:18:17.622951] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:28.334 [2024-06-10 10:18:17.622971] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:28.334 [2024-06-10 10:18:17.622985] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 
00:29:28.334 [2024-06-10 10:18:17.622997] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:28.334 [2024-06-10 10:18:17.623008] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:28.334 [2024-06-10 10:18:17.623019] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:28.334 [2024-06-10 10:18:17.623030] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:28.334 [2024-06-10 10:18:17.623042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.334 [2024-06-10 10:18:17.623053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:28.334 [2024-06-10 10:18:17.623065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.330 ms 00:29:28.334 [2024-06-10 10:18:17.623076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.334 [2024-06-10 10:18:17.623178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.334 [2024-06-10 10:18:17.623204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:28.334 [2024-06-10 10:18:17.623228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:29:28.334 [2024-06-10 10:18:17.623241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.334 [2024-06-10 10:18:17.623386] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:28.334 [2024-06-10 10:18:17.623404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:28.334 [2024-06-10 10:18:17.623417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:28.334 [2024-06-10 10:18:17.623429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:28.334 [2024-06-10 10:18:17.623452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:28.334 [2024-06-10 10:18:17.623475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:28.334 [2024-06-10 10:18:17.623486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:28.334 [2024-06-10 10:18:17.623496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:28.334 [2024-06-10 10:18:17.623516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:28.334 [2024-06-10 10:18:17.623527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:28.334 [2024-06-10 10:18:17.623547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:28.334 [2024-06-10 10:18:17.623558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:28.334 [2024-06-10 10:18:17.623578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:28.334 [2024-06-10 10:18:17.623588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623599] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl] Region p2l0 00:29:28.334 [2024-06-10 10:18:17.623609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:28.334 [2024-06-10 10:18:17.623619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:28.334 [2024-06-10 10:18:17.623630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:28.334 [2024-06-10 10:18:17.623655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:28.334 [2024-06-10 10:18:17.623669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:28.334 [2024-06-10 10:18:17.623679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:28.334 [2024-06-10 10:18:17.623690] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:28.334 [2024-06-10 10:18:17.623700] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:28.334 [2024-06-10 10:18:17.623710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:28.334 [2024-06-10 10:18:17.623724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:28.334 [2024-06-10 10:18:17.623734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:28.334 [2024-06-10 10:18:17.623744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:28.334 [2024-06-10 10:18:17.623755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:28.334 [2024-06-10 10:18:17.623765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:28.334 [2024-06-10 10:18:17.623788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:28.334 [2024-06-10 10:18:17.623798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:28.334 [2024-06-10 10:18:17.623820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:28.334 [2024-06-10 10:18:17.623851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:28.334 [2024-06-10 10:18:17.623862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623871] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:28.334 [2024-06-10 10:18:17.623882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:28.334 [2024-06-10 10:18:17.623898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:28.334 [2024-06-10 10:18:17.623910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:28.334 [2024-06-10 10:18:17.623921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:28.334 [2024-06-10 10:18:17.623932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:28.334 [2024-06-10 10:18:17.623950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:28.334 [2024-06-10 10:18:17.623969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:28.334 [2024-06-10 10:18:17.623987] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:28.334 [2024-06-10 10:18:17.624004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:28.334 [2024-06-10 10:18:17.624032] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:28.334 [2024-06-10 10:18:17.624048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:28.334 [2024-06-10 10:18:17.624060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:28.334 [2024-06-10 10:18:17.624071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:28.334 [2024-06-10 10:18:17.624082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:28.334 [2024-06-10 10:18:17.624094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:28.334 [2024-06-10 10:18:17.624105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:28.334 [2024-06-10 10:18:17.624116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:28.334 [2024-06-10 10:18:17.624127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:28.334 [2024-06-10 10:18:17.624138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:28.335 [2024-06-10 10:18:17.624218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:28.335 [2024-06-10 10:18:17.624232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:28.335 [2024-06-10 10:18:17.624255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x480000 00:29:28.335 [2024-06-10 10:18:17.624267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:28.335 [2024-06-10 10:18:17.624278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:28.335 [2024-06-10 10:18:17.624291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.335 [2024-06-10 10:18:17.624302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:28.335 [2024-06-10 10:18:17.624314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.971 ms 00:29:28.335 [2024-06-10 10:18:17.624326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.335 [2024-06-10 10:18:17.624398] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:28.335 [2024-06-10 10:18:17.624415] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:30.233 [2024-06-10 10:18:19.629049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.629119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:30.233 [2024-06-10 10:18:19.629142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2004.664 ms 00:29:30.233 [2024-06-10 10:18:19.629154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.661588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.661662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:30.233 [2024-06-10 10:18:19.661684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.136 ms 00:29:30.233 [2024-06-10 10:18:19.661696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.661847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.661867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:30.233 [2024-06-10 10:18:19.661880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:30.233 [2024-06-10 10:18:19.661891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.700803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.700868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:30.233 [2024-06-10 10:18:19.700895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.854 ms 00:29:30.233 [2024-06-10 10:18:19.700907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.700992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.701008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:30.233 [2024-06-10 10:18:19.701021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:30.233 [2024-06-10 10:18:19.701032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.701413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.701440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize trim map 00:29:30.233 [2024-06-10 10:18:19.701455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:29:30.233 [2024-06-10 10:18:19.701472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.701533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.701548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:30.233 [2024-06-10 10:18:19.701560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:30.233 [2024-06-10 10:18:19.701571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.720969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.721034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:30.233 [2024-06-10 10:18:19.721055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.367 ms 00:29:30.233 [2024-06-10 10:18:19.721068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.233 [2024-06-10 10:18:19.737925] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:30.233 [2024-06-10 10:18:19.738003] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:30.233 [2024-06-10 10:18:19.738026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.233 [2024-06-10 10:18:19.738038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:30.233 [2024-06-10 10:18:19.738054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.776 ms 00:29:30.233 [2024-06-10 10:18:19.738065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.493 [2024-06-10 10:18:19.756454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.493 [2024-06-10 10:18:19.756514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:30.493 [2024-06-10 10:18:19.756532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.314 ms 00:29:30.493 [2024-06-10 10:18:19.756544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.493 [2024-06-10 10:18:19.772334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.493 [2024-06-10 10:18:19.772386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:30.493 [2024-06-10 10:18:19.772405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.709 ms 00:29:30.493 [2024-06-10 10:18:19.772418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.493 [2024-06-10 10:18:19.788071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.493 [2024-06-10 10:18:19.788120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:30.493 [2024-06-10 10:18:19.788137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.590 ms 00:29:30.493 [2024-06-10 10:18:19.788149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.493 [2024-06-10 10:18:19.788981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.493 [2024-06-10 10:18:19.789016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:30.493 [2024-06-10 10:18:19.789031] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:29:30.493 [2024-06-10 10:18:19.789044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.493 [2024-06-10 10:18:19.867811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.493 [2024-06-10 10:18:19.867880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:30.493 [2024-06-10 10:18:19.867901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.735 ms 00:29:30.493 [2024-06-10 10:18:19.867914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.493 [2024-06-10 10:18:19.880696] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:30.493 [2024-06-10 10:18:19.881534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.494 [2024-06-10 10:18:19.881566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:30.494 [2024-06-10 10:18:19.881584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.536 ms 00:29:30.494 [2024-06-10 10:18:19.881596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.494 [2024-06-10 10:18:19.881745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.494 [2024-06-10 10:18:19.881768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:30.494 [2024-06-10 10:18:19.881781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:30.494 [2024-06-10 10:18:19.881793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.494 [2024-06-10 10:18:19.881875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.494 [2024-06-10 10:18:19.881894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:30.494 [2024-06-10 10:18:19.881907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:30.494 [2024-06-10 10:18:19.881918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.494 [2024-06-10 10:18:19.881952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.494 [2024-06-10 10:18:19.881973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:30.494 [2024-06-10 10:18:19.881985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:30.494 [2024-06-10 10:18:19.881996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.494 [2024-06-10 10:18:19.882034] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:30.494 [2024-06-10 10:18:19.882049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.494 [2024-06-10 10:18:19.882061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:30.494 [2024-06-10 10:18:19.882072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:30.494 [2024-06-10 10:18:19.882084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.494 [2024-06-10 10:18:19.913175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.494 [2024-06-10 10:18:19.913225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:30.494 [2024-06-10 10:18:19.913244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.059 ms 00:29:30.494 [2024-06-10 10:18:19.913257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:30.494 [2024-06-10 10:18:19.913349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.494 [2024-06-10 10:18:19.913368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:30.494 [2024-06-10 10:18:19.913380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:30.494 [2024-06-10 10:18:19.913391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.494 [2024-06-10 10:18:19.914737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2322.002 ms, result 0 00:29:30.494 [2024-06-10 10:18:19.929601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.494 [2024-06-10 10:18:19.945678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:30.494 [2024-06-10 10:18:19.954616] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:31.488 10:18:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:31.488 10:18:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@863 -- # return 0 00:29:31.488 10:18:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:31.488 10:18:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:31.488 10:18:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:31.762 [2024-06-10 10:18:21.047828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.762 [2024-06-10 10:18:21.047892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:31.762 [2024-06-10 10:18:21.047914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:31.762 [2024-06-10 10:18:21.047927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.762 [2024-06-10 10:18:21.047964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.762 [2024-06-10 10:18:21.047986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:31.762 [2024-06-10 10:18:21.047998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:31.762 [2024-06-10 10:18:21.048010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.762 [2024-06-10 10:18:21.048038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.762 [2024-06-10 10:18:21.048051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:31.762 [2024-06-10 10:18:21.048063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:31.762 [2024-06-10 10:18:21.048074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.762 [2024-06-10 10:18:21.048152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.312 ms, result 0 00:29:31.762 true 00:29:31.762 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:32.021 { 00:29:32.021 "name": "ftl", 00:29:32.021 "properties": [ 00:29:32.021 { 00:29:32.021 "name": "superblock_version", 00:29:32.021 "value": 5, 00:29:32.021 "read-only": true 00:29:32.021 }, 00:29:32.021 { 
00:29:32.021 "name": "base_device", 00:29:32.021 "bands": [ 00:29:32.021 { 00:29:32.021 "id": 0, 00:29:32.021 "state": "CLOSED", 00:29:32.021 "validity": 1.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 1, 00:29:32.021 "state": "CLOSED", 00:29:32.021 "validity": 1.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 2, 00:29:32.021 "state": "CLOSED", 00:29:32.021 "validity": 0.007843137254901933 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 3, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 4, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 5, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 6, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 7, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 8, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 9, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 10, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 11, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 12, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 13, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 14, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 15, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 16, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 17, 00:29:32.021 "state": "FREE", 00:29:32.021 "validity": 0.0 00:29:32.021 } 00:29:32.021 ], 00:29:32.021 "read-only": true 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "name": "cache_device", 00:29:32.021 "type": "bdev", 00:29:32.021 "chunks": [ 00:29:32.021 { 00:29:32.021 "id": 0, 00:29:32.021 "state": "INACTIVE", 00:29:32.021 "utilization": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 1, 00:29:32.021 "state": "OPEN", 00:29:32.021 "utilization": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 2, 00:29:32.021 "state": "OPEN", 00:29:32.021 "utilization": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 3, 00:29:32.021 "state": "FREE", 00:29:32.021 "utilization": 0.0 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "id": 4, 00:29:32.021 "state": "FREE", 00:29:32.021 "utilization": 0.0 00:29:32.021 } 00:29:32.021 ], 00:29:32.021 "read-only": true 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "name": "verbose_mode", 00:29:32.021 "value": true, 00:29:32.021 "unit": "", 00:29:32.021 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "name": "prep_upgrade_on_shutdown", 00:29:32.021 "value": false, 00:29:32.021 "unit": "", 00:29:32.021 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:32.021 } 00:29:32.021 ] 00:29:32.021 } 00:29:32.021 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:32.021 10:18:21 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:32.021 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:32.279 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:32.279 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:32.279 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:32.279 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:32.279 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:32.537 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:32.537 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:32.537 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:32.538 Validate MD5 checksum, iteration 1 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:32.538 10:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:32.538 [2024-06-10 10:18:21.954432] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
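The used=0 and opened=0 values earlier in this stretch come from filtering the bdev_ftl_get_properties JSON with jq, exactly as traced. Pulled out of the xtrace into a standalone form (the bdev name ftl matches this run; the surrounding shell is just the traced pipeline restated, not the test script verbatim):

    # count NV-cache chunks that still hold data; the test expects 0 before validation starts
    used=$(./scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && echo "cache still holds $used chunk(s) with data"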
00:29:32.538 [2024-06-10 10:18:21.954577] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86345 ] 00:29:32.794 [2024-06-10 10:18:22.122969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.051 [2024-06-10 10:18:22.360694] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.988  Copying: 463/1024 [MB] (463 MBps) Copying: 863/1024 [MB] (400 MBps) Copying: 1024/1024 [MB] (average 426 MBps) 00:29:37.988 00:29:37.988 10:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:37.988 10:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=73b11a823660032fbc8f9b2821714709 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 73b11a823660032fbc8f9b2821714709 != \7\3\b\1\1\a\8\2\3\6\6\0\0\3\2\f\b\c\8\f\9\b\2\8\2\1\7\1\4\7\0\9 ]] 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:39.893 Validate MD5 checksum, iteration 2 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:39.893 10:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:40.152 [2024-06-10 10:18:29.422043] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
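The lines above are one pass of the checksum-validation loop: spdk_dd reads 1024 blocks of 1 MiB from the ftln1 namespace over NVMe/TCP into test/ftl/file, the file is hashed, the digest (73b11a823660032fbc8f9b2821714709 for iteration 1) is compared against the value recorded for that region, and the skip offset advances by 1024 for the next pass. A minimal sketch of that loop, reconstructed from the traced commands (the sums array and loop bound are assumptions; the authoritative version is test/ftl/upgrade_shutdown.sh):

    # sketch reconstructed from the xtrace above, not the script verbatim
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read 1024 x 1 MiB blocks from ftln1 into a scratch file over NVMe/TCP
        ./build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=./test/ftl/config/ini.json \
            --ib=ftln1 --of=./test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum ./test/ftl/file | cut -f1 -d' ')
        # the expected digest was recorded when this region was written earlier in the test
        [[ $sum == "${sums[i]}" ]] || exit 1
    done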
00:29:40.152 [2024-06-10 10:18:29.422183] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86426 ] 00:29:40.152 [2024-06-10 10:18:29.588932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.411 [2024-06-10 10:18:29.825305] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.064  Copying: 397/1024 [MB] (397 MBps) Copying: 793/1024 [MB] (396 MBps) Copying: 1024/1024 [MB] (average 405 MBps) 00:29:46.064 00:29:46.323 10:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:46.323 10:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ff9106411f7173e38ffe467bc51ddfa8 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ff9106411f7173e38ffe467bc51ddfa8 != \f\f\9\1\0\6\4\1\1\f\7\1\7\3\e\3\8\f\f\e\4\6\7\b\c\5\1\d\d\f\a\8 ]] 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86272 ]] 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86272 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86510 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86510 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@830 -- # '[' -z 86510 ']' 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
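With both regions verified, the test then simulates a crash: the target that owns the still-dirty FTL instance is killed with SIGKILL (PID 86272 here) and a new target (PID 86510) is started from the saved tgt.json, so the FTL startup that follows has to go through dirty-shutdown recovery instead of a clean load. Reduced to its essentials (waitforlisten is the common autotest helper visible in the trace; treat the rest as a sketch, not the ftl/common.sh functions verbatim):

    # crash the running target so FTL never gets a clean shutdown
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # restart from the saved configuration and wait for the new RPC socket
    ./build/bin/spdk_tgt '--cpumask=[0]' --config=./test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"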
00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.855 10:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:48.855 [2024-06-10 10:18:38.020809] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:29:48.855 [2024-06-10 10:18:38.021028] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86510 ] 00:29:48.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 829: 86272 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:48.855 [2024-06-10 10:18:38.197005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.114 [2024-06-10 10:18:38.387835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.681 [2024-06-10 10:18:39.194039] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:49.681 [2024-06-10 10:18:39.194117] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:49.941 [2024-06-10 10:18:39.335034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.335099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:49.941 [2024-06-10 10:18:39.335119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:49.941 [2024-06-10 10:18:39.335137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.335232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.335257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:49.941 [2024-06-10 10:18:39.335271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:29:49.941 [2024-06-10 10:18:39.335282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.335320] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:49.941 [2024-06-10 10:18:39.336303] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:49.941 [2024-06-10 10:18:39.336339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.336353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:49.941 [2024-06-10 10:18:39.336366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.027 ms 00:29:49.941 [2024-06-10 10:18:39.336377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.336908] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:49.941 [2024-06-10 10:18:39.357652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.357714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:49.941 [2024-06-10 10:18:39.357734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.745 ms 00:29:49.941 [2024-06-10 
10:18:39.357746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.370589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.370671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:49.941 [2024-06-10 10:18:39.370704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:49.941 [2024-06-10 10:18:39.370727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.371364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.371400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:49.941 [2024-06-10 10:18:39.371434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:29:49.941 [2024-06-10 10:18:39.371466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.371577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.371603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:49.941 [2024-06-10 10:18:39.371617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:29:49.941 [2024-06-10 10:18:39.371628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.371694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.371713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:49.941 [2024-06-10 10:18:39.371726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:49.941 [2024-06-10 10:18:39.371737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.371780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:49.941 [2024-06-10 10:18:39.375876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.375912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:49.941 [2024-06-10 10:18:39.375933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.106 ms 00:29:49.941 [2024-06-10 10:18:39.375944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.375984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.376000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:49.941 [2024-06-10 10:18:39.376013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:49.941 [2024-06-10 10:18:39.376024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.376074] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:49.941 [2024-06-10 10:18:39.376107] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:49.941 [2024-06-10 10:18:39.376151] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:49.941 [2024-06-10 10:18:39.376177] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:29:49.941 [2024-06-10 10:18:39.376284] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:49.941 [2024-06-10 10:18:39.376299] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:49.941 [2024-06-10 10:18:39.376314] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:49.941 [2024-06-10 10:18:39.376328] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:49.941 [2024-06-10 10:18:39.376342] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:49.941 [2024-06-10 10:18:39.376355] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:49.941 [2024-06-10 10:18:39.376367] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:49.941 [2024-06-10 10:18:39.376378] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:49.941 [2024-06-10 10:18:39.376393] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:49.941 [2024-06-10 10:18:39.376405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.376416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:49.941 [2024-06-10 10:18:39.376428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:29:49.941 [2024-06-10 10:18:39.376439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.376534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.941 [2024-06-10 10:18:39.376553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:49.941 [2024-06-10 10:18:39.376565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:49.941 [2024-06-10 10:18:39.376576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.941 [2024-06-10 10:18:39.376710] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:49.941 [2024-06-10 10:18:39.376736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:49.941 [2024-06-10 10:18:39.376749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:49.941 [2024-06-10 10:18:39.376762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.941 [2024-06-10 10:18:39.376774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:49.941 [2024-06-10 10:18:39.376785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:49.941 [2024-06-10 10:18:39.376796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:49.941 [2024-06-10 10:18:39.376807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:49.941 [2024-06-10 10:18:39.376817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:49.941 [2024-06-10 10:18:39.376828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.941 [2024-06-10 10:18:39.376838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:49.941 [2024-06-10 10:18:39.376848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:49.941 [2024-06-10 10:18:39.376859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.941 [2024-06-10 10:18:39.376869] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:49.941 [2024-06-10 10:18:39.376880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:49.941 [2024-06-10 10:18:39.376890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.941 [2024-06-10 10:18:39.376900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:49.941 [2024-06-10 10:18:39.376911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:49.941 [2024-06-10 10:18:39.376921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.941 [2024-06-10 10:18:39.376931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:49.941 [2024-06-10 10:18:39.376942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:49.941 [2024-06-10 10:18:39.376952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.941 [2024-06-10 10:18:39.376963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:49.941 [2024-06-10 10:18:39.376973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:49.941 [2024-06-10 10:18:39.376984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.941 [2024-06-10 10:18:39.376994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:49.941 [2024-06-10 10:18:39.377005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:49.941 [2024-06-10 10:18:39.377015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.941 [2024-06-10 10:18:39.377025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:49.942 [2024-06-10 10:18:39.377035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:49.942 [2024-06-10 10:18:39.377046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.942 [2024-06-10 10:18:39.377056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:49.942 [2024-06-10 10:18:39.377066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:49.942 [2024-06-10 10:18:39.377076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.942 [2024-06-10 10:18:39.377086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:49.942 [2024-06-10 10:18:39.377097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:49.942 [2024-06-10 10:18:39.377107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.942 [2024-06-10 10:18:39.377121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:49.942 [2024-06-10 10:18:39.377132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:49.942 [2024-06-10 10:18:39.377142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.942 [2024-06-10 10:18:39.377152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:49.942 [2024-06-10 10:18:39.377163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:49.942 [2024-06-10 10:18:39.377173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.942 [2024-06-10 10:18:39.377183] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:49.942 [2024-06-10 10:18:39.377194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:49.942 [2024-06-10 10:18:39.377205] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:49.942 [2024-06-10 10:18:39.377216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.942 [2024-06-10 10:18:39.377228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:49.942 [2024-06-10 10:18:39.377239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:49.942 [2024-06-10 10:18:39.377249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:49.942 [2024-06-10 10:18:39.377259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:49.942 [2024-06-10 10:18:39.377284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:49.942 [2024-06-10 10:18:39.377296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:49.942 [2024-06-10 10:18:39.377309] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:49.942 [2024-06-10 10:18:39.377324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:49.942 [2024-06-10 10:18:39.377353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:49.942 [2024-06-10 10:18:39.377387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:49.942 [2024-06-10 10:18:39.377399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:49.942 [2024-06-10 10:18:39.377410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:49.942 [2024-06-10 10:18:39.377422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x2fa0 blk_sz:0x13d060 00:29:49.942 [2024-06-10 10:18:39.377504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:49.942 [2024-06-10 10:18:39.377516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:49.942 [2024-06-10 10:18:39.377541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:49.942 [2024-06-10 10:18:39.377552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:49.942 [2024-06-10 10:18:39.377564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:49.942 [2024-06-10 10:18:39.377576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.942 [2024-06-10 10:18:39.377588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:49.942 [2024-06-10 10:18:39.377600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.948 ms 00:29:49.942 [2024-06-10 10:18:39.377612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.942 [2024-06-10 10:18:39.408934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.942 [2024-06-10 10:18:39.408986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:49.942 [2024-06-10 10:18:39.409007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.236 ms 00:29:49.942 [2024-06-10 10:18:39.409019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.942 [2024-06-10 10:18:39.409091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.942 [2024-06-10 10:18:39.409107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:49.942 [2024-06-10 10:18:39.409120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:49.942 [2024-06-10 10:18:39.409132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.942 [2024-06-10 10:18:39.447889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.942 [2024-06-10 10:18:39.447943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:49.942 [2024-06-10 10:18:39.447962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.660 ms 00:29:49.942 [2024-06-10 10:18:39.447975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.942 [2024-06-10 10:18:39.448048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.942 [2024-06-10 10:18:39.448064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:49.942 [2024-06-10 10:18:39.448084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:49.942 [2024-06-10 10:18:39.448095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.942 [2024-06-10 10:18:39.448244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.942 [2024-06-10 10:18:39.448264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:49.942 [2024-06-10 
10:18:39.448277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:49.942 [2024-06-10 10:18:39.448288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.942 [2024-06-10 10:18:39.448345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.942 [2024-06-10 10:18:39.448361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:49.942 [2024-06-10 10:18:39.448374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:49.942 [2024-06-10 10:18:39.448391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.465917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.465967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.201 [2024-06-10 10:18:39.465991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.495 ms 00:29:50.201 [2024-06-10 10:18:39.466003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.466185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.466219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:50.201 [2024-06-10 10:18:39.466235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:50.201 [2024-06-10 10:18:39.466246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.497821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.497919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:50.201 [2024-06-10 10:18:39.497943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.536 ms 00:29:50.201 [2024-06-10 10:18:39.497955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.511003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.511087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:50.201 [2024-06-10 10:18:39.511107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.688 ms 00:29:50.201 [2024-06-10 10:18:39.511119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.585648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.585716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:50.201 [2024-06-10 10:18:39.585737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 74.403 ms 00:29:50.201 [2024-06-10 10:18:39.585749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.585978] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:50.201 [2024-06-10 10:18:39.586131] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:50.201 [2024-06-10 10:18:39.586274] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:50.201 [2024-06-10 10:18:39.586412] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:50.201 [2024-06-10 10:18:39.586426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.586438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:50.201 [2024-06-10 10:18:39.586451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.595 ms 00:29:50.201 [2024-06-10 10:18:39.586463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.586564] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:50.201 [2024-06-10 10:18:39.586594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.586605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:50.201 [2024-06-10 10:18:39.586619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:50.201 [2024-06-10 10:18:39.586630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.606108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.606166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:50.201 [2024-06-10 10:18:39.606186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.427 ms 00:29:50.201 [2024-06-10 10:18:39.606198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.617946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.201 [2024-06-10 10:18:39.617989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:50.201 [2024-06-10 10:18:39.618006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:50.201 [2024-06-10 10:18:39.618018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.201 [2024-06-10 10:18:39.618233] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:50.768 [2024-06-10 10:18:40.104844] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:50.768 [2024-06-10 10:18:40.105065] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:51.336 [2024-06-10 10:18:40.592226] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:51.336 [2024-06-10 10:18:40.592359] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:51.336 [2024-06-10 10:18:40.592383] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:51.336 [2024-06-10 10:18:40.592409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.592422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:51.336 [2024-06-10 10:18:40.592438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 974.307 ms 00:29:51.336 [2024-06-10 10:18:40.592451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.592502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.592518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:51.336 [2024-06-10 10:18:40.592530] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:51.336 [2024-06-10 10:18:40.592541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.605571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:51.336 [2024-06-10 10:18:40.605765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.605793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:51.336 [2024-06-10 10:18:40.605819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.199 ms 00:29:51.336 [2024-06-10 10:18:40.605830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.606625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.606669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:51.336 [2024-06-10 10:18:40.606685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.655 ms 00:29:51.336 [2024-06-10 10:18:40.606698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.609268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.609300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:51.336 [2024-06-10 10:18:40.609330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.533 ms 00:29:51.336 [2024-06-10 10:18:40.609349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.609430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.609451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:51.336 [2024-06-10 10:18:40.609465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:51.336 [2024-06-10 10:18:40.609476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.609609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.609628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:51.336 [2024-06-10 10:18:40.609655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:51.336 [2024-06-10 10:18:40.609675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.609709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.609724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:51.336 [2024-06-10 10:18:40.609741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:51.336 [2024-06-10 10:18:40.609752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.609795] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:51.336 [2024-06-10 10:18:40.609813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.609833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:51.336 [2024-06-10 10:18:40.609851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:51.336 [2024-06-10 10:18:40.609863] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.609932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.336 [2024-06-10 10:18:40.609948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:51.336 [2024-06-10 10:18:40.609960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:51.336 [2024-06-10 10:18:40.609971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.336 [2024-06-10 10:18:40.611196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1275.619 ms, result 0 00:29:51.336 [2024-06-10 10:18:40.626228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.336 [2024-06-10 10:18:40.642245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:51.336 [2024-06-10 10:18:40.651449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@863 -- # return 0 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:51.336 Validate MD5 checksum, iteration 1 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:51.336 10:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.594 [2024-06-10 10:18:40.879386] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:29:51.594 [2024-06-10 10:18:40.879557] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86545 ] 00:29:51.595 [2024-06-10 10:18:41.053152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.852 [2024-06-10 10:18:41.292178] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.844  Copying: 453/1024 [MB] (453 MBps) Copying: 865/1024 [MB] (412 MBps) Copying: 1024/1024 [MB] (average 432 MBps) 00:29:56.844 00:29:56.844 10:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:56.844 10:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:59.373 Validate MD5 checksum, iteration 2 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=73b11a823660032fbc8f9b2821714709 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 73b11a823660032fbc8f9b2821714709 != \7\3\b\1\1\a\8\2\3\6\6\0\0\3\2\f\b\c\8\f\9\b\2\8\2\1\7\1\4\7\0\9 ]] 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:59.373 10:18:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:59.373 [2024-06-10 10:18:48.451278] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:29:59.373 [2024-06-10 10:18:48.451449] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86618 ] 00:29:59.373 [2024-06-10 10:18:48.625619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.373 [2024-06-10 10:18:48.850798] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.840  Copying: 441/1024 [MB] (441 MBps) Copying: 902/1024 [MB] (461 MBps) Copying: 1024/1024 [MB] (average 442 MBps) 00:30:03.840 00:30:03.840 10:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:03.840 10:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ff9106411f7173e38ffe467bc51ddfa8 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ff9106411f7173e38ffe467bc51ddfa8 != \f\f\9\1\0\6\4\1\1\f\7\1\7\3\e\3\8\f\f\e\4\6\7\b\c\5\1\d\d\f\a\8 ]] 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:06.411 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86510 ]] 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86510 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@949 -- # '[' -z 86510 ']' 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # kill -0 86510 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # uname 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 86510 00:30:06.412 killing process with pid 86510 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # echo 'killing process with pid 86510' 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # kill 86510 00:30:06.412 10:18:55 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # wait 86510 00:30:06.976 [2024-06-10 10:18:56.488019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:07.234 [2024-06-10 10:18:56.505134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.505191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:07.234 [2024-06-10 10:18:56.505213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:07.234 [2024-06-10 10:18:56.505226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.234 [2024-06-10 10:18:56.505258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:07.234 [2024-06-10 10:18:56.508596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.508631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:07.234 [2024-06-10 10:18:56.508659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.316 ms 00:30:07.234 [2024-06-10 10:18:56.508672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.234 [2024-06-10 10:18:56.508938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.508968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:07.234 [2024-06-10 10:18:56.508983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:30:07.234 [2024-06-10 10:18:56.509003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.234 [2024-06-10 10:18:56.510259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.510301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:07.234 [2024-06-10 10:18:56.510317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.233 ms 00:30:07.234 [2024-06-10 10:18:56.510329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.234 [2024-06-10 10:18:56.511583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.511619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:07.234 [2024-06-10 10:18:56.511635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.211 ms 00:30:07.234 [2024-06-10 10:18:56.511659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.234 [2024-06-10 10:18:56.524262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.524309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:07.234 [2024-06-10 10:18:56.524327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.543 ms 00:30:07.234 [2024-06-10 10:18:56.524339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.234 [2024-06-10 10:18:56.531256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.531307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:07.234 [2024-06-10 10:18:56.531326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.865 ms 00:30:07.234 [2024-06-10 10:18:56.531347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.234 [2024-06-10 10:18:56.531444] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.234 [2024-06-10 10:18:56.531467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:07.234 [2024-06-10 10:18:56.531480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:07.235 [2024-06-10 10:18:56.531493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.543909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.235 [2024-06-10 10:18:56.543954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:30:07.235 [2024-06-10 10:18:56.543970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.393 ms 00:30:07.235 [2024-06-10 10:18:56.543981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.556500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.235 [2024-06-10 10:18:56.556545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:30:07.235 [2024-06-10 10:18:56.556563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.472 ms 00:30:07.235 [2024-06-10 10:18:56.556574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.568724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.235 [2024-06-10 10:18:56.568772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:07.235 [2024-06-10 10:18:56.568790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.103 ms 00:30:07.235 [2024-06-10 10:18:56.568802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.581057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.235 [2024-06-10 10:18:56.581099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:07.235 [2024-06-10 10:18:56.581116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.172 ms 00:30:07.235 [2024-06-10 10:18:56.581127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.581172] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:07.235 [2024-06-10 10:18:56.581198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:07.235 [2024-06-10 10:18:56.581213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:07.235 [2024-06-10 10:18:56.581225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:07.235 [2024-06-10 10:18:56.581238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 
0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:07.235 [2024-06-10 10:18:56.581418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:07.235 [2024-06-10 10:18:56.581448] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 419156a7-f19d-4252-a237-76bd50e15b42 00:30:07.235 [2024-06-10 10:18:56.581460] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:07.235 [2024-06-10 10:18:56.581476] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:07.235 [2024-06-10 10:18:56.581486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:07.235 [2024-06-10 10:18:56.581501] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:07.235 [2024-06-10 10:18:56.581512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:07.235 [2024-06-10 10:18:56.581524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:07.235 [2024-06-10 10:18:56.581535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:07.235 [2024-06-10 10:18:56.581545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:07.235 [2024-06-10 10:18:56.581555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:07.235 [2024-06-10 10:18:56.581567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.235 [2024-06-10 10:18:56.581579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:07.235 [2024-06-10 10:18:56.581591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.397 ms 00:30:07.235 [2024-06-10 10:18:56.581602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.598189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.235 [2024-06-10 10:18:56.598239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:07.235 [2024-06-10 10:18:56.598258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.560 ms 00:30:07.235 [2024-06-10 10:18:56.598270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.598733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:07.235 [2024-06-10 10:18:56.598760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:07.235 [2024-06-10 10:18:56.598785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:30:07.235 [2024-06-10 10:18:56.598797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.650835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.235 [2024-06-10 10:18:56.650893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:07.235 [2024-06-10 10:18:56.650913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.235 [2024-06-10 10:18:56.650925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.650986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.235 [2024-06-10 10:18:56.651001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:07.235 [2024-06-10 10:18:56.651013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.235 [2024-06-10 10:18:56.651035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.651161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.235 [2024-06-10 10:18:56.651181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:07.235 [2024-06-10 10:18:56.651194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.235 [2024-06-10 10:18:56.651206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.651243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.235 [2024-06-10 10:18:56.651259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:07.235 [2024-06-10 10:18:56.651277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.235 [2024-06-10 10:18:56.651288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.235 [2024-06-10 10:18:56.750320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.235 [2024-06-10 10:18:56.750393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:07.235 [2024-06-10 10:18:56.750414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.235 [2024-06-10 10:18:56.750426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.834931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.493 [2024-06-10 10:18:56.835000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:07.493 [2024-06-10 10:18:56.835020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.493 [2024-06-10 10:18:56.835032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.835141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.493 [2024-06-10 10:18:56.835170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:07.493 [2024-06-10 10:18:56.835195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.493 [2024-06-10 10:18:56.835212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.835284] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.493 [2024-06-10 10:18:56.835302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:07.493 [2024-06-10 10:18:56.835314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.493 [2024-06-10 10:18:56.835325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.835448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.493 [2024-06-10 10:18:56.835468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:07.493 [2024-06-10 10:18:56.835487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.493 [2024-06-10 10:18:56.835499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.835550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.493 [2024-06-10 10:18:56.835578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:07.493 [2024-06-10 10:18:56.835593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.493 [2024-06-10 10:18:56.835604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.835677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.493 [2024-06-10 10:18:56.835697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:07.493 [2024-06-10 10:18:56.835715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.493 [2024-06-10 10:18:56.835727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.835783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:07.493 [2024-06-10 10:18:56.835800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:07.493 [2024-06-10 10:18:56.835812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:07.493 [2024-06-10 10:18:56.835823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.493 [2024-06-10 10:18:56.835969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 330.800 ms, result 0 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:08.869 Remove shared memory files 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:08.869 10:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86272 00:30:08.869 10:18:57 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:08.869 10:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:08.869 00:30:08.869 real 1m37.964s 00:30:08.869 user 2m21.744s 00:30:08.869 sys 0m23.162s 00:30:08.869 10:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:08.869 10:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:08.869 ************************************ 00:30:08.869 END TEST ftl_upgrade_shutdown 00:30:08.869 ************************************ 00:30:08.869 10:18:58 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:30:08.869 10:18:58 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:08.869 10:18:58 ftl -- ftl/ftl.sh@14 -- # killprocess 79036 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@949 -- # '[' -z 79036 ']' 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@953 -- # kill -0 79036 00:30:08.869 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 953: kill: (79036) - No such process 00:30:08.869 Process with pid 79036 is not found 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@976 -- # echo 'Process with pid 79036 is not found' 00:30:08.869 10:18:58 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:08.869 10:18:58 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86749 00:30:08.869 10:18:58 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.869 10:18:58 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86749 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@830 -- # '[' -z 86749 ']' 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:08.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:08.869 10:18:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:08.869 [2024-06-10 10:18:58.163841] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:30:08.869 [2024-06-10 10:18:58.164011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86749 ] 00:30:08.869 [2024-06-10 10:18:58.337512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.127 [2024-06-10 10:18:58.566309] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.061 10:18:59 ftl -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:10.061 10:18:59 ftl -- common/autotest_common.sh@863 -- # return 0 00:30:10.061 10:18:59 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:10.318 nvme0n1 00:30:10.318 10:18:59 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:10.318 10:18:59 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.318 10:18:59 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:10.577 10:18:59 ftl -- ftl/common.sh@28 -- # stores=72f17c32-a956-47a0-8bd1-314036ff935d 00:30:10.577 10:18:59 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:10.577 10:18:59 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 72f17c32-a956-47a0-8bd1-314036ff935d 00:30:10.835 10:19:00 ftl -- ftl/ftl.sh@23 -- # killprocess 86749 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@949 -- # '[' -z 86749 ']' 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@953 -- # kill -0 86749 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@954 -- # uname 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 86749 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:10.835 killing process with pid 86749 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@967 -- # echo 'killing process with pid 86749' 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@968 -- # kill 86749 00:30:10.835 10:19:00 ftl -- common/autotest_common.sh@973 -- # wait 86749 00:30:13.363 10:19:02 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:13.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:13.363 Waiting for block devices as requested 00:30:13.363 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:13.363 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:13.363 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:13.621 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:18.973 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:18.973 Remove shared memory files 00:30:18.973 10:19:07 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:18.973 10:19:07 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:18.973 10:19:07 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:18.973 10:19:07 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:18.973 10:19:07 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:18.973 10:19:07 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:18.973 10:19:07 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:18.973 00:30:18.973 real 11m42.935s 00:30:18.973 user 
14m43.880s 00:30:18.973 sys 1m32.269s 00:30:18.973 10:19:07 ftl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:18.973 ************************************ 00:30:18.973 10:19:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:18.973 END TEST ftl 00:30:18.973 ************************************ 00:30:18.973 10:19:08 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:18.973 10:19:08 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:18.973 10:19:08 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:18.973 10:19:08 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:18.973 10:19:08 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:18.973 10:19:08 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:18.973 10:19:08 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:18.973 10:19:08 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:18.973 10:19:08 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:18.973 10:19:08 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:18.973 10:19:08 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:18.973 10:19:08 -- common/autotest_common.sh@10 -- # set +x 00:30:18.973 10:19:08 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:18.973 10:19:08 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:30:18.973 10:19:08 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:30:18.973 10:19:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.907 INFO: APP EXITING 00:30:19.907 INFO: killing all VMs 00:30:19.907 INFO: killing vhost app 00:30:19.907 INFO: EXIT DONE 00:30:20.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:20.732 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:20.732 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:20.732 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:20.732 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:20.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:21.557 Cleaning 00:30:21.557 Removing: /var/run/dpdk/spdk0/config 00:30:21.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:21.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:21.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:21.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:21.557 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:21.557 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:21.557 Removing: /var/run/dpdk/spdk0 00:30:21.557 Removing: /var/run/dpdk/spdk_pid61801 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62017 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62232 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62336 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62387 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62515 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62543 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62719 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62816 00:30:21.557 Removing: /var/run/dpdk/spdk_pid62910 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63018 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63118 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63158 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63200 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63268 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63374 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63834 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63905 00:30:21.557 Removing: 
/var/run/dpdk/spdk_pid63979 00:30:21.557 Removing: /var/run/dpdk/spdk_pid63995 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64136 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64152 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64290 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64311 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64379 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64398 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64457 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64479 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64662 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64704 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64785 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64855 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64897 00:30:21.557 Removing: /var/run/dpdk/spdk_pid64970 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65016 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65063 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65109 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65156 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65197 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65249 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65290 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65342 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65383 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65430 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65476 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65523 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65564 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65616 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65657 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65704 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65753 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65803 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65849 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65897 00:30:21.557 Removing: /var/run/dpdk/spdk_pid65979 00:30:21.557 Removing: /var/run/dpdk/spdk_pid66095 00:30:21.557 Removing: /var/run/dpdk/spdk_pid66262 00:30:21.557 Removing: /var/run/dpdk/spdk_pid66346 00:30:21.557 Removing: /var/run/dpdk/spdk_pid66394 00:30:21.557 Removing: /var/run/dpdk/spdk_pid66861 00:30:21.557 Removing: /var/run/dpdk/spdk_pid66965 00:30:21.557 Removing: /var/run/dpdk/spdk_pid67074 00:30:21.557 Removing: /var/run/dpdk/spdk_pid67137 00:30:21.557 Removing: /var/run/dpdk/spdk_pid67164 00:30:21.557 Removing: /var/run/dpdk/spdk_pid67240 00:30:21.557 Removing: /var/run/dpdk/spdk_pid67878 00:30:21.557 Removing: /var/run/dpdk/spdk_pid67920 00:30:21.557 Removing: /var/run/dpdk/spdk_pid68432 00:30:21.557 Removing: /var/run/dpdk/spdk_pid68536 00:30:21.557 Removing: /var/run/dpdk/spdk_pid68651 00:30:21.557 Removing: /var/run/dpdk/spdk_pid68708 00:30:21.557 Removing: /var/run/dpdk/spdk_pid68735 00:30:21.557 Removing: /var/run/dpdk/spdk_pid68766 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70616 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70765 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70769 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70781 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70832 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70836 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70848 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70893 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70897 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70909 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70954 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70958 00:30:21.557 Removing: /var/run/dpdk/spdk_pid70970 00:30:21.557 Removing: /var/run/dpdk/spdk_pid72315 00:30:21.557 Removing: /var/run/dpdk/spdk_pid72415 
00:30:21.557 Removing: /var/run/dpdk/spdk_pid73806 00:30:21.557 Removing: /var/run/dpdk/spdk_pid75176 00:30:21.557 Removing: /var/run/dpdk/spdk_pid75319 00:30:21.557 Removing: /var/run/dpdk/spdk_pid75441 00:30:21.816 Removing: /var/run/dpdk/spdk_pid75567 00:30:21.816 Removing: /var/run/dpdk/spdk_pid75712 00:30:21.816 Removing: /var/run/dpdk/spdk_pid75792 00:30:21.816 Removing: /var/run/dpdk/spdk_pid75932 00:30:21.816 Removing: /var/run/dpdk/spdk_pid76301 00:30:21.816 Removing: /var/run/dpdk/spdk_pid76339 00:30:21.816 Removing: /var/run/dpdk/spdk_pid76821 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77009 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77108 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77226 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77290 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77317 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77596 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77658 00:30:21.816 Removing: /var/run/dpdk/spdk_pid77736 00:30:21.816 Removing: /var/run/dpdk/spdk_pid78120 00:30:21.816 Removing: /var/run/dpdk/spdk_pid78265 00:30:21.816 Removing: /var/run/dpdk/spdk_pid79036 00:30:21.816 Removing: /var/run/dpdk/spdk_pid79174 00:30:21.816 Removing: /var/run/dpdk/spdk_pid79371 00:30:21.816 Removing: /var/run/dpdk/spdk_pid79468 00:30:21.816 Removing: /var/run/dpdk/spdk_pid79833 00:30:21.816 Removing: /var/run/dpdk/spdk_pid80103 00:30:21.816 Removing: /var/run/dpdk/spdk_pid80461 00:30:21.816 Removing: /var/run/dpdk/spdk_pid80658 00:30:21.816 Removing: /var/run/dpdk/spdk_pid80794 00:30:21.817 Removing: /var/run/dpdk/spdk_pid80858 00:30:21.817 Removing: /var/run/dpdk/spdk_pid81002 00:30:21.817 Removing: /var/run/dpdk/spdk_pid81038 00:30:21.817 Removing: /var/run/dpdk/spdk_pid81091 00:30:21.817 Removing: /var/run/dpdk/spdk_pid81295 00:30:21.817 Removing: /var/run/dpdk/spdk_pid81537 00:30:21.817 Removing: /var/run/dpdk/spdk_pid81952 00:30:21.817 Removing: /var/run/dpdk/spdk_pid82395 00:30:21.817 Removing: /var/run/dpdk/spdk_pid82798 00:30:21.817 Removing: /var/run/dpdk/spdk_pid83312 00:30:21.817 Removing: /var/run/dpdk/spdk_pid83450 00:30:21.817 Removing: /var/run/dpdk/spdk_pid83555 00:30:21.817 Removing: /var/run/dpdk/spdk_pid84229 00:30:21.817 Removing: /var/run/dpdk/spdk_pid84312 00:30:21.817 Removing: /var/run/dpdk/spdk_pid84734 00:30:21.817 Removing: /var/run/dpdk/spdk_pid85160 00:30:21.817 Removing: /var/run/dpdk/spdk_pid85663 00:30:21.817 Removing: /var/run/dpdk/spdk_pid85791 00:30:21.817 Removing: /var/run/dpdk/spdk_pid85844 00:30:21.817 Removing: /var/run/dpdk/spdk_pid85914 00:30:21.817 Removing: /var/run/dpdk/spdk_pid85977 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86048 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86272 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86345 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86426 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86510 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86545 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86618 00:30:21.817 Removing: /var/run/dpdk/spdk_pid86749 00:30:21.817 Clean 00:30:21.817 10:19:11 -- common/autotest_common.sh@1450 -- # return 0 00:30:21.817 10:19:11 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:21.817 10:19:11 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:21.817 10:19:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.817 10:19:11 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:21.817 10:19:11 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:21.817 10:19:11 -- common/autotest_common.sh@10 -- # set +x 00:30:21.817 10:19:11 -- 
spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:22.075 10:19:11 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:30:22.075 10:19:11 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:30:22.075 10:19:11 -- spdk/autotest.sh@391 -- # hash lcov
00:30:22.075 10:19:11 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:30:22.075 10:19:11 -- spdk/autotest.sh@393 -- # hostname
00:30:22.075 10:19:11 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:30:22.075 geninfo: WARNING: invalid characters removed from testname!
00:30:54.146 10:19:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:55.520 10:19:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:58.855 10:19:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:01.387 10:19:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:03.921 10:19:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:07.206 10:19:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:09.740 10:19:59 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:09.740 10:19:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:31:09.740 10:19:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:09.740 10:19:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:09.740 10:19:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:09.740 10:19:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.740 10:19:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.740 10:19:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.740 10:19:59 -- paths/export.sh@5 -- $ export PATH
00:31:09.740 10:19:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.740 10:19:59 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:31:09.740 10:19:59 -- common/autobuild_common.sh@437 -- $ date +%s
00:31:09.740 10:19:59 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718014799.XXXXXX
00:31:09.740 10:19:59 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718014799.hnTZt0
00:31:09.740 10:19:59 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:31:09.740 10:19:59 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:31:09.740 10:19:59 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:31:09.740 10:19:59 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:31:09.740 10:19:59 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:31:09.740 10:19:59 -- common/autobuild_common.sh@453 -- $ get_config_params
00:31:09.740 10:19:59 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:09.740 10:19:59 -- common/autotest_common.sh@10 -- $ set +x
00:31:09.740 10:19:59 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:31:09.740 10:19:59 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:31:09.740 10:19:59 -- pm/common@17 -- $ local monitor
00:31:09.740 10:19:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:09.740 10:19:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:09.740 10:19:59 -- pm/common@21 -- $ date +%s
00:31:09.740 10:19:59 -- pm/common@25 -- $ sleep 1
00:31:09.740 10:19:59 -- pm/common@21 -- $ date +%s
00:31:09.740 10:19:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1718014799
00:31:09.740 10:19:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1718014799
00:31:09.740 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1718014799_collect-vmstat.pm.log
00:31:09.740 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1718014799_collect-cpu-load.pm.log
00:31:10.673 10:20:00 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:31:10.673 10:20:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:31:10.673 10:20:00 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:31:10.673 10:20:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:10.673 10:20:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:10.673 10:20:00 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:10.673 10:20:00 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:10.673 10:20:00 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:10.673 10:20:00 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:10.673 10:20:00 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:10.673 10:20:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:10.673 10:20:00 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:10.673 10:20:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:10.673 10:20:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:10.673 10:20:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:31:10.673 10:20:00 -- pm/common@44 -- $ pid=88449
00:31:10.673 10:20:00 -- pm/common@50 -- $ kill -TERM 88449
00:31:10.673 10:20:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:10.673 10:20:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:31:10.673 10:20:00 -- pm/common@44 -- $ pid=88451
00:31:10.673 10:20:00 -- pm/common@50 -- $ kill -TERM 88451
00:31:10.673 + [[ -n 5203 ]]
00:31:10.673 + sudo kill 5203
00:31:10.936 [Pipeline] }
00:31:10.951 [Pipeline] // timeout
00:31:10.957 [Pipeline] }
00:31:10.973 [Pipeline] // stage
00:31:10.977 [Pipeline] }
00:31:10.991 [Pipeline] // catchError
00:31:10.998 [Pipeline] stage
00:31:10.999 [Pipeline] { (Stop VM)
00:31:11.012 [Pipeline] sh
00:31:11.410 + vagrant halt
00:31:15.596 ==> default: Halting domain...
00:31:22.174 [Pipeline] sh
00:31:22.456 + vagrant destroy -f
00:31:26.648 ==> default: Removing domain...
00:31:26.662 [Pipeline] sh
00:31:26.942 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:31:26.952 [Pipeline] }
00:31:26.972 [Pipeline] // stage
00:31:26.979 [Pipeline] }
00:31:26.996 [Pipeline] // dir
00:31:27.003 [Pipeline] }
00:31:27.022 [Pipeline] // wrap
00:31:27.028 [Pipeline] }
00:31:27.045 [Pipeline] // catchError
00:31:27.056 [Pipeline] stage
00:31:27.059 [Pipeline] { (Epilogue)
00:31:27.076 [Pipeline] sh
00:31:27.359 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:33.937 [Pipeline] catchError
00:31:33.939 [Pipeline] {
00:31:33.955 [Pipeline] sh
00:31:34.238 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:34.497 Artifacts sizes are good
00:31:34.506 [Pipeline] }
00:31:34.526 [Pipeline] // catchError
00:31:34.537 [Pipeline] archiveArtifacts
00:31:34.543 Archiving artifacts
00:31:34.695 [Pipeline] cleanWs
00:31:34.706 [WS-CLEANUP] Deleting project workspace...
00:31:34.706 [WS-CLEANUP] Deferred wipeout is used...
00:31:34.713 [WS-CLEANUP] done
00:31:34.715 [Pipeline] }
00:31:34.732 [Pipeline] // stage
00:31:34.738 [Pipeline] }
00:31:34.755 [Pipeline] // node
00:31:34.761 [Pipeline] End of Pipeline
00:31:34.800 Finished: SUCCESS